Daily Archives: July 3, 2019

Compromise by Proxy? Why You Should Be Losing Sleep Tonight

If you’ve heard of the medical bill collector American Medical Collections Agency (AMCA), it’s probably not because you saw an ad on TV. Most likely you heard about its supernova-level mismanagement of cybersecurity, or you read that, as a consequence, the company filed for Chapter 11 bankruptcy protection.

The AMCA breach affected as many as 20 million consumers. The compromise at this third- and sometimes fourth-party debt collection agency went on for months and affected at least five different labs: Quest Diagnostics, LabCorp, BioReference Laboratories, Carecentrix, and Sunrise Laboratories, all of which used AMCA as their customer bill payment portal.

During the eight months the vulnerability was unaddressed by AMCA, hackers had access to the company’s online payment page, and with that a cornucopia of sensitive personally identifiable information that included financial data, Social Security numbers, and, in one case, medical information.

Lamentable, Avoidable, Illegal and Expensive

This epic cybersecurity fail was avoidable. The AMCA breach was not only a failure to protect the millions of consumers whose data was exposed. It may be the result of AMCA’s failure to comply with HIPAA legislation.

We need to get a little granular here. As a third-party vendor to a HIPAA covered entity, AMCA would almost certainly be subject to the requirements of the HIPAA Privacy, Security, Enforcement, and Breach Notification Rules. According to the U.S. Department of Health and Human Services representative I contacted by email, a medical bill collector is a business associate if it receives, creates, maintains, or transmits protected health information on behalf of the covered entity for a covered function, such as seeking to obtain payment for a medical bill. Among many requirements, such business associates are directly liable for “failure to provide breach notification to a covered entity or another business associate” and “failure to take reasonable steps to address a material breach or violation of the subcontractor’s business associate agreement.”

It is unclear whether AMCA failed to take reasonable steps to address and report the breach. The AMCA spokesperson declined to comment on this story, instead sending a link to the company’s website. That said, breaches come in all shapes and sizes. Some are more avoidable than others. And breach response varies even more, with ever more divergent degrees of competency.

There are many enterprise-level solutions out there to minimize the risk of such catastrophic cybersecurity events, but they aren’t available to a company that doesn’t know what it doesn’t know. In this regard, knowledge of cyber risks and cyber defense is a fungible asset.

The bottom line tells the tale best. AMCA needed to file for bankruptcy protection. While I am not in a position to say exactly why this was the case, last year’s average per-record cost, according to IBM’s “2018 Cost of a Data Breach Study,” was $157, with the average total cost to a company coming in at $4.24 million.

In other words, getting cyber wrong can represent an extinction-level event for many organizations.

The Anatomy of Liability

The AMCA breach was discovered by Gemini Advisory analysts at the end of February 2019. A database described as “USA/DOB/SSN” had been posted for purchase on the dark web. On March 1, Gemini Advisory attempted to notify AMCA and received no response. Multiple phone messages were left regarding the breach. Still, there was no response. Gemini Advisory then notified law enforcement. AMCA did not disable its payment portal until April 8.

The AMCA breach is not an isolated incident for third-party vendors in the healthcare industry. According to a recent report cited by a letter from Sen. Mark Warner (D-Va.) to Quest Diagnostics, 20 percent of data breaches in the healthcare sector in 2018 were traced to third-party vendors. Additionally, about 56 percent of provider organizations have experienced a third-party breach.

You would expect the vetting process a company implements in selecting third-party vendors to be fully evolved by now, with industry-standard approaches to cybersecurity and a host of other concerns and considerations. Sadly, many companies do not have specific policies regarding the cybersecurity requirements of subcontracted entities, much less an established path to approval that assures best cyber practices are understood and practiced throughout an organization’s data ecosystem.

When it comes to debt collection, there seems to be a more pervasive lack of standards. The debt collection industry’s lobbying organization–the Association of Credit and Collection Professionals, or ACA International–offers no services or outreach that resemble an information sharing and analysis center, or ISAC. According to the ACA representative I contacted, the ACA is not in the practice of collecting, analyzing or sharing cyber threat information. They mostly seem to lobby for an impediment-free legislative environment.

The absence of an ISAC matters because hackers thrive on low information. The same or similar attack is much easier to perpetrate on multiple debt collection agencies if they have no idea there’s a threat out there. Knowing what to look for, and being prepared for the attack du jour, is among the most powerful cyber defenses. While ACA International does provide compliance guidelines as well as two opt-in data security and privacy programs in its ongoing educational seminars, it’s all passive. No one has to do anything. Cybersecurity is not a spectator sport. It is an ongoing activity that must evolve as urgently and persistently as the threats it addresses.

Vetting, Adulting: Take Your Pick

It’s time to grow up. In the absence of specific federal regulations on the cybersecurity practices of third-party vendors, the companies that subcontract with them have to self-police and develop effective vetting processes. When asked whether it vets third-party vendors–or the companies those vendors in turn subcontract–Quest Diagnostics declined to provide me with an answer. LabCorp’s responses to my questions on this score were similarly unilluminating.

It should go without saying that data breaches and compromises caused by third-party subcontractors and business associates are not unique to the healthcare sector. U.S. Customs and Border Protection officials issued a statement on Monday that photos of travelers’ faces and license plates had been compromised due to a “malicious cyberattack.” The data breach originated from a subcontractor network.

The prevalence of data breaches that originate with third parties has long been an open secret, and lawmakers are increasingly demanding answers. Sens. Robert Menendez (D-N.J.), Cory Booker (D-N.J.) and Mark R. Warner (D-Va.) sent letters asking the testing labs what they did to vet the security measures of AMCA and how the breach went unnoticed for so long. They also asked what cybersecurity measures the labs had in place at the time, and whether all affected parties had been notified. Fair questions all.

If you need a more institutional take, Moody’s Investors Service designated the AMCA breach a credit negative for both Quest Diagnostics and LabCorp, and predicted the breach could result in “new regulations and requirements” regarding how U.S. companies evaluate their vendors before selecting them. We can hope.

The AMCA breach is merely the latest manifestation of the perils of hiring a third-party subcontractor insufficiently cyber-safe for this or that assignment. The lab testing companies may have had cybersecurity best practices in place, but they were only as secure as their least-protected third-party vendor. Data breaches are becoming drastically more frequent, and companies that fail to operate within a cybersecurity framework when hiring third-party business associates may well find themselves on the bankruptcy side of a catastrophic breach.

To manifest the wisdom of Yogi Berra, the only solution here is to have a solution. If you don’t have one, it’s time to find one, or practice your vetting skills on hiring a third party to help you get your cyber game where it needs to be to survive the third certainty in life: Breach happens. Survival is a skill.

The post Compromise by Proxy? Why You Should Be Losing Sleep Tonight appeared first on Adam Levin.

Smashing Security #135: Zombie grannies and unintended leaks

We take a bloodied baseball bat to Android malware, and debate the merits of a social media strike, as one of the team bites the bullet and buys a smart lock for the office.

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Oli Skertchly.

Introducing Veracode’s New Analytics Capabilities

“If we have data, let's look at data. If all we have are opinions, let's go with mine.” -- Jim Barksdale

The ability to report on your application security program depends on access to your AppSec data. For questions from “how can I help my board understand our current risk posture?” to “which teams are developing secure code, and which need additional AppSec training?” – data is the key. Nobody should guess when it comes to answering questions as important as “are we compliant?”

We have recently transformed how Veracode helps you answer questions regarding your AppSec program. By building an entirely new back-end infrastructure and using a cutting-edge analytics tool, we’ve been able to give our customers new insights into our data and provide a hub for AppSec analytics, right within the Veracode platform.

The goals of our new analytics

As part of our analytics re-design, we needed to support two use cases: (1) our customers’ analytics needs in the platform using the embedded BI tool, and (2) our internal sales and services staff’s analytics needs using a standalone BI tool. To provide this functionality, we needed a tool that could support our security model and multi-tenancy while providing excellent performance, scalability, flexibility, and support in a Veracode-hosted environment. 

Behind the scenes of the analytics overhaul

Our new infrastructure design started with moving data into the AWS cloud, and replicating scan, findings, and organizational data into an AWS Redshift database. This database drives our existing reporting capabilities, both in the platform and in any custom reports we create, and the performance has been outstanding. From there, we use SQL transformations to take our operational schema and map the data into a “star schema.” The star schema is a simple, denormalized schema designed to minimize the number of joins required in analytics queries, which optimizes their performance. We replicate this star schema into our smaller back-end Redshift databases, which feed data into two BI instances. 
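To make the idea concrete, here is a minimal, hypothetical sketch of the kind of operational-to-star-schema transform described above. The table and column names (ops_findings, dim_application, fact_findings, and so on) are invented for illustration and are not Veracode’s actual schema; in a real pipeline the SQL would be executed inside Redshift.

```python
# A minimal, hypothetical illustration of an operational-to-star-schema transform.
# Table and column names are invented for this sketch and are not Veracode's
# actual schema; in a real pipeline the SQL would be executed inside Redshift.

FACT_FINDINGS_DDL = """
CREATE TABLE IF NOT EXISTS fact_findings (
    application_key BIGINT,      -- foreign key into dim_application
    scan_date_key   INT,         -- foreign key into dim_date
    severity        SMALLINT,
    finding_count   BIGINT
);
"""

FACT_FINDINGS_LOAD = """
INSERT INTO fact_findings (application_key, scan_date_key, severity, finding_count)
SELECT da.application_key,
       dd.date_key,
       f.severity,
       COUNT(*)
FROM   ops_findings f                         -- normalized operational source
JOIN   dim_application da ON da.application_name = f.app_name
JOIN   dim_date        dd ON dd.calendar_date    = f.scan_date
GROUP  BY da.application_key, dd.date_key, f.severity;
"""

if __name__ == "__main__":
    # A real pipeline would run these against Redshift; here we just print them.
    print(FACT_FINDINGS_DDL)
    print(FACT_FINDINGS_LOAD)
```

Analytics queries then touch one wide fact table and a handful of small dimension tables, which is what keeps dashboard queries fast.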

 

 

Our standalone BI instance, used internally at Veracode to understand our customers’ AppSec portfolios, was developed first. Internal Analytics contains scans and findings data, as well as operational and low-level data that wouldn’t be of interest to a customer but is useful for our internal analytics. Once data was available in Internal Analytics, we set up meetings with SMEs across Veracode to build standardized dashboards that would provide value across Veracode. We defined the set of questions that Veracoders wanted to answer for their customers, e.g., “are we compliant?”, “how long do our scans take?”, “how frequently do we scan?”, and built dashboards that answered those questions. This resulted in nine shared dashboards and four “explores,” which provided our internal users with the ability to answer the bulk of their questions about Veracode’s customer base. The shared dashboards provide a good starting point for our users, and then if they wish to explore data from scratch they can start at one of the pre-defined “explore” levels: Applications, Scans, Findings, or Users.

Once Internal Analytics was generally available, we turned our sights on our external BI instance, the Veracode Analytics feature, which contains a subset of the data available in Internal Analytics. We realized early on that our internal and external customers have many of the same questions, and we were able to reuse eight of the nine dashboards in our new analytics offering, which you can see under Veracode Dashboards in the Analytics section of the Veracode platform. The four predefined explores are also available in the platform for new analyses.

 

 

We have designed these solutions so that the same built-in dashboards, base data, and data models are used in both internal and external analytics, which reduces both the development time and testing required for these shared dashboards. By creating automated test suites and a well-defined automated CI/CD pipeline for our dashboards and data models, we can ensure that new development in our analytics environment will not break the existing analytics. 
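As a rough illustration of the kind of automated check such a pipeline might run, here is a small, hypothetical pytest-style test that verifies every field a dashboard references actually exists in the data model. The dashboard and model definitions are invented for this sketch and do not reflect Veracode’s metadata format.

```python
# A minimal, hypothetical pytest-style check of the kind a CI pipeline for
# analytics assets could run: it verifies that every field referenced by a
# dashboard definition exists in the data model. The content below is invented
# for illustration only.

DATA_MODEL = {
    "findings": {"application_name", "severity", "finding_count", "scan_date"},
}

DASHBOARDS = {
    "policy_compliance": [("findings", "application_name"), ("findings", "severity")],
}


def test_dashboard_fields_exist_in_model():
    for dashboard, fields in DASHBOARDS.items():
        for view, field in fields:
            assert view in DATA_MODEL, f"{dashboard}: unknown view '{view}'"
            assert field in DATA_MODEL[view], (
                f"{dashboard}: field '{field}' missing from view '{view}'"
            )


if __name__ == "__main__":
    test_dashboard_fields_exist_in_model()
    print("dashboard/model consistency check passed")
```

A check like this running on every commit is what allows shared dashboards and data models to change without silently breaking the analytics customers already rely on.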

Security policy

Security is job #1 (and #2, and #3, and so on) at Veracode, so we hold ourselves to exceptionally high standards when it comes to security. This means we encrypt data both in transit and at rest, and focus on data security at all levels. Additionally, we “practice what we preach” here at Veracode, meaning all of our data engineering and analytics code is scanned using Veracode, and our strict policies mean we continually monitor our own policy compliance. 

Take advantage of our new capabilities

Now that the Veracode Analytics solution is live and available to all customers, I encourage you to use it to understand your AppSec risk posture. Start with our built-in dashboards, and then play around. With each tile in a dashboard, you can right click in the top left corner and choose “explore from here” to change the visualization. Data is power, and we are happy to be putting the power of AppSec data in our customers’ hands!

Is Your Smart Home Secure? 5 Tips to Help You Connect Confidently

With so many smart home devices being used today, it’s no surprise that users would want a tool to help them manage this technology. That’s where Orvibo comes in. This smart home platform helps users manage their smart appliances such as security cameras, smart lightbulbs, thermostats, and more. Unfortunately, the company left an Elasticsearch server online without a password, exposing billions of user records.

The database was found in mid-June, meaning it’s been exposed to the internet for two weeks. The database appears to have cycled through at least two billion log entries, each containing data about Orvibo SmartMate customers. This data includes customer email addresses, the IP address of the smart home devices, Orvibo usernames, and hashed passwords.
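The root cause here was an Elasticsearch server reachable without credentials. As a rough illustration, the short Python sketch below (using the third-party requests package) shows how an administrator could check whether one of their own Elasticsearch endpoints answers anonymously. The hostname is a placeholder, and you should only probe infrastructure you are authorized to test.

```python
# A minimal sketch (using the third-party "requests" package) for checking whether
# one of *your own* Elasticsearch endpoints answers without credentials.
# The host below is a placeholder; only probe infrastructure you are authorized to test.
import requests

ES_URL = "http://elasticsearch.example.internal:9200"  # hypothetical endpoint


def is_open_to_anonymous(url: str) -> bool:
    resp = requests.get(url, timeout=5)
    # A 401/403 means authentication is enforced; a 200 with cluster info means
    # anyone who can reach the port can read (and often write) the data.
    return resp.status_code == 200


if __name__ == "__main__":
    if is_open_to_anonymous(ES_URL):
        print("WARNING: cluster responds without authentication")
    else:
        print("Cluster requires credentials")
```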

 

More IoT devices are being created every day, and we as users are eager to bring them into our homes. However, device manufacturers need to make sure that they are creating these devices with at least a basic level of security protection so users can feel confident utilizing them. Likewise, it’s important for users to remember the risks associated with these internet-connected devices if they don’t practice proper cybersecurity hygiene. Taking the time to properly secure your devices can mean the difference between a cybercriminal accessing your home network and being kept out. Check out these tips to help you remain secure when using your IoT devices:

  • Research before you buy. Although you might be eager to get the latest device, some are made more secure than others. Look for devices that make it easy to disable unnecessary features, update software, or change default passwords. If you already have an older device that lacks these features, consider upgrading.
  • Safeguard your devices. Before you connect a new IoT device to your network, be sure to change the default username and password to something strong and unique. Hackers often know the default settings of various IoT devices and share them online for others to exploit. Turn off other manufacturer settings that don’t benefit you, like remote access, which could be used by cybercriminals to access your system.
  • Update, update, update. Make sure that your device software is always up-to-date. This will ensure that you’re protected from any known vulnerabilities. For some devices, you can even turn on automatic updates to ensure that you always have the latest software patches installed.
  • Secure your network. Just as it’s important to secure your actual device, it’s also important to secure the network it’s connected to. Help secure your router by changing its default name and password and checking that it’s using an encryption method to keep communications secure. You can also look for home network routers or gateways that come embedded with security software like McAfee Secure Home Platform.
  • Use a comprehensive security solution. Use a solution like McAfee Total Protection to help safeguard your devices and data from known vulnerabilities and emerging threats.

And, as always, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home  on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Is Your Smart Home Secure? 5 Tips to Help You Connect Confidently appeared first on McAfee Blogs.

Episode 528 – Things To Think About Before Using A Password Manager

Password managers can provide increased security and management capabilities around your accounts and passwords. This episode talks about a great article I came across about things to think about before using a password manager. Source article by Stuart Schechter. Be aware, be safe. Become A Patron! Patreon Page *** Support the podcast with a cup […]

The post Episode 528 – Things To Think About Before Using A Password Manager appeared first on Security In Five.

Getting Started with Cloud Governance

Governing cloud security and privacy in the enterprise is hard, but it’s also critical: As recently noted in a blog by Cloud Transformation Specialist Brooke Noelke, security and complexity remain the two most significant obstacles to achieving enterprise cloud goals. Accelerating cloud purchases and tying them together without critical governance has resulted in many of today’s enterprise security executives losing sleep, as minimally secured cloud provider estates run production workloads, and organizations only begin to tackle outstanding SaaS (Software as a Service) footprints.

For security professionals and leaders, the on-premise (or co-location) data center seems simple by comparison: Want to protect applications in the data center? Because a workload there has a network connection in the data center, certain boundaries and processes already apply. Business unit leaders aren’t exactly standing by with a credit card, trying to load up on tens of thousands of dollars of 4U servers, storage racks, and a couple of SAN heads and then trying to expense it. In other words, for a workload in the data center, certain procurement controls must be completed, an IT review established, and implementation steps enforced before the servers “light up”—and networking gates must be established for connectivity and publishing.

When it comes to the cloud, however, we’re being asked to fulfill new roles, while continuing to serve as protector of all the organization’s infrastructure, both new and existing. Be the rule setter. Contribute to development practice. Be the enforcer. And do all of this while at the same time making sure all the other projects you already had planned for the next 18 months get accomplished, as well …

Without appropriate controls and expectation-setting, development teams could use a credit card and publish a pre-built workload—from registration to world-accessibility—in hours! Sadly, that’s the reality at many organizations today, in a world where as much as 11% of a company’s published sensitive data is likely to be present in custom/engineered cloud applications.

Simplify Governance – Be Transparent

One of the biggest challenges for today’s businesses is understanding what the “sanctioned” path to cloud looks like: Who do they reach out to? Why should they engage the security team and other IT partners when the software vendor is willing to take credit cards directly? At many of today’s enterprises, “Security Awareness” initiatives mean some emails and a couple training sessions a year on “building block” security measures, with a particular focus on detecting phishing emails. While these measures have their place, security teams should also establish regular partnership meetings at the business unit level to “advertise” available services to “accelerate” capabilities into the cloud.

However, instead of simply explaining the steps the security team requires in order to complete the process, the emphasis should be on what departments gain by engaging the security team early: Faster funding and procurement approvals. Proactive scheduling of scarce resources for application review. Accelerated provisioning. And ultimately, faster spend and change times, with less risk and hopefully with minimal schedule impact.

The security team also needs to help the business understand that, while they may not see it reflected in direct line items today, there is a cost per application that they are generating for existing/legacy applications. If the perception is that today’s applications are “free,” but the team needs a line item to be created in new projects for cloud security deployments, it encourages people to exit the process or to avoid things that add to the price—or, at least, to fight an internal battle to push back on each line-item add. Our job is to help the organization understand that today’s security spend is around 7% of infrastructure or application spend, and to set the expectation that whatever the next-generation project budget is, an associated investment should be expected—in both technology and people—to secure the platform.

Establish a Goal and Discuss It

Does your business understand what the “goal line” looks like when it comes to putting something into the cloud? Would they know where to go to find the diagram(s) or list(s) that define that? What level of cloud competency and security understanding does someone in the business need in order to consume what your team has published?

If the answer to one or more of these questions is a shrug—or demands a master’s level understanding of technical knowledge—how can we as the leaders of the security space expect the business to readily partner with us in a process they don’t understand?

Published policy with accompanying detailed standards is a start. But the security team has an opportunity to go a step further with very basic conceptual “block” diagrams, which set the “minimum viable protection” that the business’ “minimum viable product” must have to go into production.

The easiest way to do this is to take a minimum control set, and then create a few versions of the diagram—in other words, one for the smallest footprint and one or more at larger scale—to explain to the organization how the requirements “flex” according to the size and traffic volume of what has been deployed.
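As a purely illustrative sketch, the snippet below encodes a “minimum viable protection” baseline plus tier-specific additions keyed to traffic volume. The control names and tier thresholds are invented placeholders that an organization would replace with its own standards.

```python
# A hypothetical sketch of a tiered "minimum viable protection" control set.
# Control names and tier thresholds are invented for illustration only; each
# organization would substitute its own standards and sizing criteria.

BASELINE_CONTROLS = [
    "encryption at rest and in transit",
    "centralized authentication (SSO/MFA)",
    "logging shipped to the SIEM",
    "vulnerability scan before go-live",
]

TIERED_CONTROLS = {
    # tier: (max monthly requests, additional controls beyond the baseline)
    "small":  (1_000_000,  []),
    "medium": (50_000_000, ["WAF in front of public endpoints", "quarterly pen test"]),
    "large":  (None,       ["WAF in front of public endpoints", "quarterly pen test",
                            "dedicated incident runbook", "DDoS protection"]),
}


def required_controls(monthly_requests: int) -> list[str]:
    """Return the control set that applies to a workload of the given size."""
    for _tier, (ceiling, extras) in TIERED_CONTROLS.items():
        if ceiling is None or monthly_requests <= ceiling:
            return BASELINE_CONTROLS + extras
    return BASELINE_CONTROLS


if __name__ == "__main__":
    print(required_controls(5_000_000))  # a medium-sized workload
```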

Cloud Governance is Possible

Governance is the initial building block for cloud security. Being successful in protecting cloud applications requires effective technical controls, like MVISION Cloud’s product risk assessment and protection for enterprise data through unified policy. For the organization to mature and further reduce risk, governance must become as much about consulting with businesses regarding cloud consumption as it has been historically about risk meetings and change reviews. With a few simple adjustments and intentional internal marketing investments, your team can start the journey.

The post Getting Started with Cloud Governance appeared first on McAfee Blogs.

How to Use Social Engineering Penetration Tests to Protect Against Phishing Attacks

As long as you have an email address, you will forever be sent phishing emails attempting to lure you into some malicious activity. While we’re all familiar with the concept of these emails, it’s another thing entirely when it comes to designing one. Pen testers are given just such a task when they are charged with simulating a phishing campaign for an organization.

These campaigns are designed to give an organization data on how vulnerable they are to such attacks and serve as educational opportunities to teach employees about ways to recognize and avoid getting phished. Such campaigns can be the difference between a company that suffers a huge breach, and one that remains secure. With such high stakes, it’s important for pen testers to carefully craft their phish, just as a fly fisher carefully crafts each fly. Read on for key strategies pen testers keep in mind before deploying a social engineering campaign.

Think like an attacker.

In order to simulate a phishing attack, you have to keep the goals of a threat actor in mind. Phishing is typically used for one of two purposes. First, attackers may be trying to get malicious code past the perimeter. A target clicks a link or attempts to open an attachment in an email, releasing malware into the organization. This malware could be used for any number of purposes, like creating a backdoor that the threat actor can then use to access the network.

Phishing is also used to convince a user to share their credentials, which can then be used for further attacks. This may be achieved by redirecting a user to a website that is designed to imitate a legitimate site that requires a login.

Design your phish to fit an attacker’s desired outcome. If the goal is to release a malicious payload, you may only need to entice a user to click on a link to a potentially interesting news article. On the other hand, if you need a login, you would want an email that imitates a service that you know they use.  

Have a few obvious phish.

Many people still associate phishing with the early days of email, when phishing messages were fairly easy to spot, with email addresses like jsmith@fakebusiness.com and vague, misspelled subject lines like “Pleeze Opne.” These days, phishing is usually much more sophisticated, with junk filters catching most of the obvious culprits. That said, some recognizable phish do still sneak through, so a campaign should include some of these easy-to-spot phish. Having some easy wins along with progressively more challenging options helps to show the full spectrum of phishing. Additionally, if people open such transparent phish, it may show that some users aren’t paying any attention to what they’re opening.

Use phish that are active in the wild.

Sometimes you may not need to look any further than your own inbox to find phish to use in your next campaign. If any have been able to get past your spam filter, or even fooled you at first glance, they may be viable candidates to use in a campaign. However, it’s important to ensure that you’re only using an imitation of these real phish. That way you can be sure to strip any actual harmful files or links from these emails before sending them. 

Additionally, take the time to study active campaigns using sources like PhishTank to find the latest phish that are currently circulating around the web. Even news stories about phishing attacks can be used as inspiration for creating a phish.

Not only will using wild phish provide valuable data, but users who were susceptible to the test version during the campaign will now be on the alert. If the real version actually does arrive in their inboxes once the campaign is over, users will think twice before clicking.

Create customized phish.

The more specific a phish is, the more likely it is to be opened. Doing research using open source intelligence resources like the white pages, social media, etc. is critical prep work before launching a phishing campaign. Personalize phish in any way that you can – names, addresses, location, interests, etc. The more specific you can be, the less a user takes time to scrutinize. Simulating a business you know someone uses is far less likely to garner suspicion than an email from a bank they don’t belong to.   

Have a variety of phish.

A social engineering penetration test should simulate a real-world situation as much as possible. The best way to do this is to have phish of every level – obvious phish, generic but well-constructed phish, and highly customized bespoke phish. These phish should also have variety in terms of their content – some should attempt to draw users towards a malicious site, others should be intended to get someone to open an attachment. Some should imitate internal coworkers; others should imitate external companies unrelated to the business. This will provide an organization with the best data in terms of how susceptible their employees are, and what they need to work on.

You aren’t limited to email.

While some organizations may focus entirely on email-based pen testing, it’s good to keep in mind that phishing can be done through other forms of communication. Voice phishing can be used to acquire important PINs, for example. Text message phishing is also becoming increasingly popular, and can be particularly dangerous when used on a company-issued cell phone, or even a personal device that is connected to the organization’s network. 

Keep phishing.

Take the time to keep up with the latest techniques and think creatively on different methods. Ensure that you’re using tools that help you get the most out of these tests, like Core Impact. Doing post campaign analysis with metrics like click rates, login numbers, and flagging instances will help show what an organization needs to work on. Additionally, these reports will become even more valuable to show progress after regular retesting.
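For a sense of what that post-campaign analysis can look like, here is a minimal sketch that computes click, credential-submission, and report rates from raw campaign counts. The numbers in the example are made up, and the snippet is generic; it is not Core Impact’s reporting output.

```python
# A minimal sketch of post-campaign metrics of the kind mentioned above
# (click rate, credential-submission rate, report/flag rate). The sample
# numbers are invented and do not come from any real campaign.

def campaign_metrics(sent: int, clicked: int, submitted_creds: int, reported: int) -> dict:
    return {
        "click_rate": clicked / sent,
        "credential_submission_rate": submitted_creds / sent,
        "report_rate": reported / sent,
    }


if __name__ == "__main__":
    # Example: 500 phish sent, 85 clicks, 22 credential submissions, 140 user reports.
    for name, value in campaign_metrics(500, 85, 22, 140).items():
        print(f"{name}: {value:.1%}")
```

Tracking these rates across repeated campaigns is what turns a one-off exercise into a measurable trend line.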

Ultimately, the most important part of social engineering tests like phishing campaigns is to not rest on your laurels. Since attackers are constantly retooling and trying different tactics, pen testers must do so as well.

 

Ready to build a phishing campaign?

See how Core Impact can help you create a full scope of penetration tests with a live demo from one of our experts.

Protect sensitive information with Seqrite Encryption

Among the many assets that an enterprise possesses, data is undoubtedly the most important. In today’s digital age, reams of data are processed, transmitted and disseminated every millisecond, and much of the world’s economy runs on data. Hence, organizations must take every possible measure to safeguard this precious data. Data encryption is one such method, ensuring the protection of a company’s sensitive information from malicious parties.

Seqrite Encryption Manager (SEM) is an advanced solution that protects corporate data residing on endpoints with strong encryption algorithms such as AES, RC6, SERPENT and TWOFISH. It provides a powerful answer to problems like unauthorized access and the protection of private data by maximizing an organization’s data protection options. Two of the most important advantages of endpoint encryption are exceptional policy administration and key management, along with highly functional remote device management.
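For readers unfamiliar with what such algorithms do, here is a generic illustration of authenticated AES encryption using Python’s third-party cryptography package. It is not Seqrite’s API or implementation; it simply shows the kind of symmetric primitive that products like SEM build on.

```python
# A generic illustration of AES-based authenticated encryption using the
# third-party "cryptography" package. This is NOT Seqrite's API or code; it
# only shows the kind of symmetric primitive encryption products build on.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt(plaintext: bytes, key: bytes) -> bytes:
    nonce = os.urandom(12)                      # unique nonce per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)


def decrypt(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)


if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    secret = encrypt(b"quarterly payroll export", key)
    assert decrypt(secret, key) == b"quarterly payroll export"
    print("round trip OK,", len(secret), "bytes stored")
```

In a full-disk or removable-media product, the same idea is applied transparently to entire volumes, with key management and pre-boot authentication layered on top.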

Some of the key features which make Seqrite’s Encryption solution powerful and robust are:

Centralized Management and Control

SEM supports centralized control and management of disk volumes, recovery information and diverse encryption policies. Full disk encryption is also offered; thanks to the pre-boot authentication feature, organizations stay in full control of user data, and the right key is required to access the data. Seqrite Encryption Manager also supports media encryption for removable devices. Rescue and deployment techniques are provided, minimizing the possibility of data damage during encryption.

Full Disk & Removable Media Encryption

All data on hard disk drives is protected: thanks to the pre-boot authentication feature, full disk encryption loads before the operating system. This ensures that nobody has any access to data on the computer or the drive without the right password or keys. The removable media encryption feature secures USB drives and other removable devices, restricting unsolicited access to their contents regardless of the device they are used on.

Ease of Deployment

Seqrite Encryption provides users with easy deployment and rescue functions to avoid losing encrypted data accidentally. The Remote Installation tool facilitates deployment of Seqrite Encryption Manager clients across multiple endpoints at a time and also in the form of groups. The Pre-Requisites Tool scans the system for different parameters before installing the SEM client.

Rescue Methods

All critical rescue information is stored in a secure SEM database, allowing security administrators to recover encrypted client data in case of an emergency or a forgotten password.

Secure Access of Data

SEM assures data protection at rest and in motion. Encrypted files on removable storage devices can be accessed, through the Traveller Tool, even on a system where the encryption agent is not installed.

Suspend Protection

The Suspend Protection feature allows administrators to temporarily suspend client protection (boot time authentication). The volumes still remain encrypted though. This makes it a useful and important feature for the management of servers that are required to function around-the-clock.

In addition, there are other important features which make Seqrite Encryption Manager a valuable tool for data protection:

  • Group Management feature allowing client computers to be managed with the help of groups and with different attributes.
  • Scheduled Backups & Upgrade allowing administrators to schedule automatic updates along with automatic backups of the database.
  • Encryption Policies to decrypt or encrypt local volumes or removable drives, with the user having the privilege to create policies and manage volumes locally.
  • Reports which administrators can generate for groups, user accounts and computers in HTML or PDF formats.

Hence, it is quite clear that Seqrite Encryption Manager offers a simple and easy-to-use encryption solution to keep data safe. At a time when data is easily leaked and big names have made the news over data leaks, businesses must show they are serious about protecting the sensitive data they use. In that respect, SEM offers a one-stop solution to improve the overall security posture.

The post Protect sensitive information with Seqrite Encryption appeared first on Seqrite Blog.

US Cyber Command warns nation-state hackers are exploiting old Microsoft Outlook bug. Make sure you’re patched!

US Cyber Command has issued an alert about an unnamed foreign country’s attempt to spread malware through the exploitation of a vulnerability in Microsoft Outlook, amid concerns about a rise in an Iranian-backed hacking group’s activities.

Read more in my article on the Hot for Security blog.

Senate Passes Bill to Help Defend U.S. Energy Grid against Digital Attacks

The United States Senate has passed a bill to help strengthen the defenses of the U.S. energy grid against digital attacks. On 27 June, the Senate passed the Securing Energy Infrastructure Act. Introduced by U.S. Senators Angus King (I-Maine) and Jim Risch (R-Idaho), the main purpose of the bipartisan bill is to remove security vulnerabilities […]… Read More

The post Senate Passes Bill to Help Defend U.S. Energy Grid against Digital Attacks appeared first on The State of Security.

Data deletion: Your data strategy’s greatest defense

After exposing personal information of more than 650,000 customers, pub chain Wetherspoon decided to delete almost all the customer information it had been storing to reduce risk. After all, the data you don’t have doesn’t need to be checked for compliance, disclosed in a GDPR subject access request or apologized for after a data breach.

In fact, data can be so toxic that Joshua de Larios-Heiman, chair of the California Lawyers Association Internet & Privacy Law Committee, suggests thinking of it as uranium rather than oil. “What happens to spent uranium rods? They become toxic assets and getting rid of them is really difficult. People will sue you if you dispose of them negligently,” he says.


75% of organisations have been hit by spear phishing

Phishing scams aren’t as compelling as some of the more sophisticated attacks that you read about. But their prosaic nature is part of what makes them so concerning.

After all, every unusual email you receive could be a phishing scam, whether it’s an account reset message from Amazon or a work request from your boss.

And evidence shows that attacks like this will happen regularly and in incredibly convincing ways. For example, Proofpoint’s Understanding Email Fraud Survey has found that 75% of organisations had been hit by at least one spear phishing email in 2018.

Spear phishing is a specific type of phishing attack in which criminals tailor their scams to a specific person. They do this by researching the target online – often using information from social media – and by imitating a familiar email address.

For example, if the target works at ‘Company X’, the attacker might register the domain ‘connpanyx’ (that’s c-o-n-n-p-a-n-y-x rather than c-o-m-p-a-n-y-x), hoping that the recipient won’t spot the difference.

You might think that would be easy enough to notice, but scammers are adept at hiding the signs of their scams.
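One way defenders catch this kind of trick is by comparing sender domains against a list of trusted domains and flagging near misses. The sketch below does this with only the Python standard library; the trusted-domain list and similarity threshold are illustrative, not a production detection rule.

```python
# A minimal sketch for flagging lookalike sender domains such as the
# "connpanyx" example above, using only the standard library. The trusted
# domains and threshold are illustrative placeholders.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"companyx.com", "companyx.co.uk"}   # hypothetical


def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    if sender_domain in TRUSTED_DOMAINS:
        return False                      # exact match: genuinely ours
    return any(
        SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )


if __name__ == "__main__":
    for domain in ("companyx.com", "connpanyx.com", "example.org"):
        verdict = "suspicious lookalike" if is_lookalike(domain) else "ok"
        print(f"{domain}: {verdict}")
```

Email security gateways apply far more sophisticated versions of this idea, but the principle is the same: near-identical is exactly what should raise suspicion.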

Sustained threat of spear phishing

Proofpoint’s report found that 41% of organisations suffered multiple attacks in a two-year span, suggesting that those that fell victim once were likely to do so again.

It also found that only 40% of organisations have full visibility into email threats, meaning those organisations are being targeted regularly and simply aren’t aware of the scale of the threat.


Commenting on the report, Robert Holmes, vice president of email security products at Proofpoint, said:

“Email fraud is highly pervasive and deceptively simple; hackers don’t need to include attachments or URLs, emails are distributed in fewer volumes, and typically impersonate people in authority for maximum impact.

“These and other factors make email fraud, also known as business email compromise (BEC), extremely difficult to detect and stop with traditional security tools. Our research underscores that organizations and boardrooms have a duty to equip the entire workforce with the necessary solutions and training to protect everyone against this growing threat.”

Phishing is a top concern

Clearswift’s Cyber Threatscape report also highlights the threat of phishing. The information security organisation polled 600 decision makers and 1,200 employees in the UK, US, Germany and Australia, and found that 59% of respondents said phishing was their biggest concern.

Phishing was the number one risk in all four regions, beating out the threat of employees’ lax attitudes (33%), the vulnerability of removable devices (31%) and failure to remove login access from ex-employees (28%).

According to Dr Guy Bunker, senior vice president of products at Clearswift, this report “highlights that businesses need to change the way they’re approaching the task of mitigating these risks”.

“The approach should be two-fold, focused on balancing education with a robust technological safety net. This will ultimately help ensure the business stays safe,” he adds.

How can you prevent phishing attacks?

There are several ways you can address the risk of phishing. The first is to conduct staff awareness courses to educate employees on how phishing scams work and what they can do to mitigate the risk.

These courses should be repeated annually to refresh employees’ memories and maintain a workplace culture that prioritises cyber security.

You may also benefit from a thorough re-evaluation of your approach to cyber security. Our Security Awareness Programme does just that, helping you generate tangible and lasting improvements to your organisation’s security awareness.

It combines a learning needs assessment to identify the areas that your organisation should focus on, with a series of tools and services to address problems as they arise, including hands-on support from a specialist consultant, pocket guides and e-learning courses.

Find out more about our Security Awareness Programme >>


A version of this blog was originally published on 9 April 2018.

The post 75% of organisations have been hit by spear phishing appeared first on IT Governance Blog.

SOX – Not Just for Foxes and Baseball; A Sarbanes-Oxley IT Compliance Primer

There are Red Sox, White Sox, and Fox in Socks. At the turn of the century, a new SOX entered our lexicon: The Sarbanes-Oxley Act of 2002. This financial regulation was a response to large corporate misdeeds at the time, most notably Enron misleading its board through poor accounting practices and insufficient financial oversight. The […]… Read More

The post SOX – Not Just for Foxes and Baseball; A Sarbanes-Oxley IT Compliance Primer appeared first on The State of Security.

Ransomware As A Tool – LockerGoga

Ransomware authors keep experimenting with the development of payloads in various dimensions. Over the timeline of ransomware implementations, we have seen its evolution from a simple screen locker to a multi-component model for file encryption, and from a novice approach to a sophisticated one. Ransomware as a Tool has evolved in the wild…

Microsoft MVP Award, Year 9

I've become increasingly reflective about my career this year, especially as Project Svalbard marches forward and I look back on what it's taken to get here. As I have more discussions around the various turning points in my professional life, there's one that stands out above most others: my first MVP award.

This is not a path I planned; in fact, when I originally got that award I referred to myself as The Accidental MVP. But I also think that's the best way to earn any of the awards I've since received: not by setting out with the award as the goal, but rather by focusing on the activities for which the award is granted. I wrote a blog people found useful and I continue to do that today. The first award prompted me to start speaking publicly and obviously that's something I continue to do today too. So, before anyone asks "how do I become a Microsoft MVP", there's your answer. That and a pointer to the page on What it takes to be an MVP.

One last thing to add to that, and it's the value of community encouragement. There's no way I would have stuck to this path if it wasn't for all the social media engagement, blog comments and conference selfies. It's hard to express just what a massive role encouragement plays in keeping me motivated to do this; knowing that your work is valued is absolutely essential, and I still get a kick out of seeing messages like the one I received just last week in Israel.

Incidentally, I'm still a Microsoft Regional Director too which runs as a parallel program. I still don't have a region, I still don't direct anything and I still don't get paid by Microsoft. Everyone with me? Good!