Want to determine the safety of a car? Perform a crash test. One of the most common ways to test the strength of something, particularly in technology, is to put it through a stress test. Naturally, the same principle is a critical component of cybersecurity. One of the most effective ways to find weaknesses in your security infrastructure, and to test your security team’s ability to detect and respond to attacks, is through red team/blue team exercises. Read on to learn the differences between these teams, the emergence of purple teams, and the most effective ways to utilize them.
Red team and blue team tests are named and modeled after military exercises. In order to ensure soldiers are battle ready, simulations are run to test out the effectiveness of their defense strategies. In these simulations, red teams take on the offensive role of the enemy, while the blue team is on the defensive, shielding their position. In the cybersecurity realm, the roles are the same, but the battlefield is in the digital sphere.
What is a Red Team?
Red teams are designed to think like attackers, and are brought on specifically to put the organization’s cybersecurity posture to the test, utilizing multiple strategies in order to breach defenses. Some of these approaches include vulnerability assessments, penetration tests, or even social engineering attacks like phishing. Red teams use a variety of tools, such as pen testing solutions like Core Impact, to create the most effective simulation they can.
Though key parties may be informed that a red team campaign is taking place, most employees, including the organization’s IT team, won’t be notified until after the fact, keeping the simulation as authentic as possible.
Red teams can be internal, which helps set up long-term goals and ensures frequent testing. Oftentimes, however, they are hired from an external firm. Having an outside team, like Security Consulting Services, come in can also be ideal since they provide a fresh pair of expert eyes, often spotting vulnerabilities that internal security personnel miss simply because internal teams have such frequent exposure to the environment they’re testing.
What is a Blue Team?
Blue teams are in charge of building up an organization’s protective measures, and taking action when needed. This is done in a variety of ways. Regular system hardening procedures include updates, patching, eliminating unused software or features, or changing passwords. Additionally, new security tools can be deployed, like SIEM solutions that help blue teams monitor data logs from different assets for security alerts.
What is a Purple Team?
More recently, the idea of a purple team has become the latest buzzword in the cybersecurity world. While there is some confusion surrounding the usage and definition of the term, it’s best to focus on the ideal it is promoting. Ultimately, the concept of a purple team is the mindset of seeing and treating red and blue teams as symbiotic. It’s not red teams vs. blue teams, but rather one large team focusing on the one overarching goal: improving security. The key to becoming a purple team comes down to communication.
One of the purposes of a red team is to act as a training function for the blue team. Infiltrating and testing the environment is only part of the job. Measuring and improving the ability to detect and respond to attacks is a key part of living up to the ideal of being a purple team. Red teams must prioritize documentation and education efforts so that blue teams can take appropriate action towards remediation and build up resiliency.
Blue teams, in turn, should view the findings of a red team as a guide for where to focus their efforts, and as a roadmap to find vulnerabilities before the next exercise. In a perfect scenario, red teams wouldn’t find the same vulnerability twice.
Best Practices, No Matter the Color
Operating like a purple team simply means adhering to best practices in order to create an environment that is a stronghold against cyber-attacks. As mentioned above, communication between teams is the most critical element in this, but here are a few other ways to get the most out of your red team and blue team exercises:
Have a plan of action.
The planning stages of simulation exercises are just as important as the exercises themselves. There are endless scenarios and methodologies to use when attempting to exploit a system, so it’s vital to limit your scope. Red teams should have set objectives and measurable goals that will provide helpful data for blue teams to analyze. Blue teams should use this data to create their own objectives and goals for remediation.
Always follow up.
While it’s tempting to simply move on to the next task, it’s critical to follow up after every exercise. Retrospectives are a great way for teams to learn from one another and can shed further light on patching and preventing weaknesses. Additionally, fixes themselves must also be verified, so following up with retesting efforts is crucial.
Think outside the box.
Threat actors aren’t following a set of rules when they break into a system. Red teamers can stay within the scope of the exercise while still having the freedom to be equally creative. However, remember to show your work – blue teams can only prevent an attack if they can understand how it was done.
Never stop learning.
Promote a culture of learning and encourage both red and blue teams to stay up to date on the latest tools and tricks to prevent being caught off guard. Hackers are always evolving, and true purple teams evolve right along with them.
Equip your red team with a comprehensive pen testing solution that can safely exploit vulnerabilities. Get a live demo of Core Impact today.
McAfee Advanced Threat Research recently released a blog detailing a vulnerability in the Mr. Coffee Coffee Maker with WeMo. Please refer to the earlier blog to catch up with the processes and techniques I used to investigate and ultimately compromise this smart coffee maker. While researching the device, there was always one attack vector that I had wanted to revisit. It was during the writing of that blog that I was finally able to circle back to it. As it turns out, my intuition was accurate; the second vulnerability I found was much simpler and still allowed me to gain root access to the target.
Recapping the original vulnerability
The first vulnerability modified the “template” section of the brew schedule rule file, which is a unique file sent when the user schedules a brew in advance. I also needed to modify the template itself, which is sent from the WeMo App directly to the coffee maker. During that research I noticed that many of the other fields looked like they could be impactful, but I did not investigate them as thoroughly as the template field.
Figure 1: Brew schedule rule
When the user schedules a brew, an individual rule is added to the Mr. Coffee root crontab. The crontab entry uses the rule’s “id” field to make sure the correct rule is executed at the desired time.
Figure 2: Root crontab entry
Crontab provides basic scheduling at the OS level. The user supplies both the command to execute and timing details down to the minute, as shown in Figure 3.
Figure 3: Crontab syntax
During the initial research, I started to fuzz the rule “id” field; however, because every rule name I placed in the malicious schedule was always prefixed with “/sbin/rtng_run_rule”, I could not get anything abnormal to happen. I also noticed that many characters useful for command injection were being filtered.
The following is a list of characters sanitized or filtered on input.
At this point I moved on and ended up finding the template vulnerability as laid out in the previous blog.
Finding an even simpler vulnerability
A few months after disclosing to Belkin, I revisited the steps required to abuse the template feature while preparing the public disclosure blog. The ability to write arbitrary code directly into root’s crontab is enticing, so I began looking into it again. I needed to find a way to terminate the “rtng_run_rule” command and append my own commands to the crontab file by modifying the “id” field. The “rtng_run_rule” file is a shell script that directly calls a Lua script named “rtng_run_rule.lua”. I noticed that I could send the double pipe (“||”) characters, but the “rtng_run_rule” wrapper script would never return a failing exit code. Next, I looked at how the wrapper script handles command-line arguments, as shown below.
Figure 4: rtng_run_rule wrapper script
At this point I created a new rule: “-f|| touch test”. The “-f” is not a parsed argument, meaning it falls through to the “Bad option” case, causing the “rtng_run_rule” wrapper script to return “-1”. With the wrapper script returning a failing exit code, the “||” (or) operator takes effect, executing “touch test” and creating an empty file named “test”. Since I still had serial access (my previous blog explains in detail how I achieved this), I was able to log in to the coffee maker and find the “test” file, which was located in root’s home directory.
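The mechanics of this injection are easy to reproduce locally. The sketch below is a hypothetical stand-in rather than the device’s code: the `false` command plays the role of the “rtng_run_rule” wrapper rejecting the bad option, and the malicious rule id is appended to it exactly as it would appear in the crontab line:

```python
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()

# The malicious rule id. "-f" is not a valid option, so the left-hand
# command exits with a failing status and the "||" branch runs "touch test".
rule_id = "-f|| touch test"

# "false" stands in for /sbin/rtng_run_rule hitting its "Bad option" case.
subprocess.run("false " + rule_id, shell=True, cwd=workdir)

print(os.path.exists(os.path.join(workdir, "test")))  # True
```

The same shell parsing applies on the coffee maker: cron hands the whole line to a shell, so a failing left-hand command passes control to whatever follows the “||”.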
Being able to write arbitrary files and execute commands without the “/” character is still somewhat limiting, as most file paths and web URLs will need forward slashes. I needed to find a way to execute commands that had “/” characters in them. I decided to do this by downloading a file from a webserver I control and executing it in Ash to bypass file path sanitization characters.
Figure 5: Commands allowing for execution of filtered characters.
Let me break this down. The “-f”, as indicated before, causes the wrapper script to fail, which triggers the “||” branch. The “wget” command then initiates a download from my web server, located at IP address “172.16.127.31”. The “-q” flag suppresses wget’s own status output so that only the downloaded content is printed, and “-O -” tells wget to write to STDOUT instead of a file. Finally, the “| ash” command takes everything from STDOUT and executes it as Linux shell commands.
This way I can set up a server that simply returns a file containing necessary Linux commands and host it on my local machine. When I send the rule with the above command injection it will reach out to my local server and execute everything as root. The technique of piping wget into Ash also bypasses all the character filtering so I can now execute any command I want.
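The download-and-execute trick generalizes beyond wget and Ash. As an illustration (no network or coffee maker required), the hypothetical Python sketch below feeds “downloaded” bytes straight into a shell’s stdin, the same way “| ash” does. Note that the payload can freely contain “/” and other filtered characters, because it never passes through the sanitized rule id field:

```python
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()
proof = os.path.join(workdir, "proof_of_exec")

# Stand-in for the content served by the attacker's web server. It may
# contain "/" and any other filtered characters, since only the short
# wget command itself goes through the sanitized rule id field.
payload = ("touch " + proof + "\n").encode()

# Equivalent of `wget -q -O - <server> | ash`: pipe the fetched bytes
# into a shell, which executes them as commands.
subprocess.run(["sh"], input=payload, check=True)

print(os.path.exists(proof))  # True
```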
Status with Vendor
Belkin did patch the original template vulnerability and released new firmware. The vulnerability explained in this blog was found on the new firmware and, as of today, we have not heard of any plans for a patch. This vulnerability was disclosed to Belkin on February 25th, 2019. In accordance with our vulnerability disclosure policy, we are releasing details of this flaw today in hopes of alerting consumers of the device to the ongoing security findings. While this bug is also within the Mr. Coffee with WeMo’s scheduling function, it is much easier for an attacker to leverage since it does not require any modifications to templates or rehashing of code changes. The following demo video shows how this vulnerability can be used to compromise other devices on the network, including a fully patched Windows 10 PC.
Key takeaways for enterprises, consumers and vendors
Devices such as the Mr. Coffee Coffee Maker with WeMo serve as a good reminder of the pros and cons of “smart” IoT. While advances in automation and technology offer exciting new capabilities, they should be weighed against the potential security concerns. In a home setting, consumers should set up these types of devices on a segmented network, isolated from sensitive network traffic and more critical devices. They should implement a strong password policy to make network access more challenging and apply patches or updates for all networked devices whenever available.

Enterprises should restrict access to devices such as these in corporate environments or, at a minimum, provide a policy for oversight and management. They should be treated the same as any other asset on the network, as IoT devices are often unmonitored pivot points into more critical network infrastructure. Network scanning and vulnerability assessments should be performed, in conjunction with a rigorous patching cycle for known issues. While the vendor has not provided a CVE for this vulnerability, we calculated a CVSS score of 9.1 out of 10, which categorizes it as a critical vulnerability.

Finally, as consumers of these products, we need to ask more of the vendors and manufacturers. A better understanding of secure coding and vulnerability assessment is critical before products go to market. Vendors who implement a vulnerability reporting program and respond quickly can gain consumers’ trust and ensure product reputation is undamaged. One goal of the McAfee Advanced Threat Research team is to identify and illuminate a broad spectrum of threats in today’s complex and constantly evolving landscape. Through analysis and responsible disclosure, we aim to guide product manufacturers toward a more comprehensive security posture.
This year, Canada, multiple European nations, and others will host high-profile elections. Cyber-enabled threats that disrupt and target elections have become an area of increasing awareness for governments and citizens globally. To develop solutions and security programs that counter cyber threats to elections, it is important to begin by properly categorizing the threat. In this post, we’ll explore the various threats to elections FireEye has observed and provide a framework for organizations to sort these activities.
The Election Ecosystem: Targets
Historically, FireEye has observed targeting of a wide range of organizations connected to elections. In considering their role and criticality to the process of elections, these various entities can be grouped into three categories: core election infrastructure, supporting organizations involved in the administration of elections, and other groups that have a participatory role in the electoral process. All of these entities may be targeted for a variety of reasons to influence or collect intelligence on the electoral process and participants.
FireEye is aware of only limited indications of entities targeted in the first category (light blue area). Although we have not observed direct evidence that actors have manipulated the electoral process in any major national or regional election by infiltrating the systems or hardware used to record or tally votes, the sheer complexity of these systems prevents us from categorically stating that these systems have not been successfully compromised.
Moving outward into the gray section of the diagram, entities in this category include organizations involved in the administration of elections. While these organizations may maintain networks separate from voting systems and tabulation platforms, they play important roles in overseeing elections and communicating results to the public. FireEye has witnessed breaches into a variety of these organizations, in some cases for the purpose of collecting intelligence, and in others to co-opt publicly facing systems and display false information on them as part of an influence campaign.
Lastly, FireEye has observed targeting of organizations that are involved in election campaigns and news coverage. Tactics we have witnessed include disinformation campaigns on adversary-maintained infrastructure and social media platforms. For example, in August 2017, we observed several inauthentic news websites created to mimic legitimate local and international media organizations ahead of a sub-Saharan African nation’s presidential election. A subset of the counterfeit domains appears to have been created in coordination with each other, if not by the same actor, to damage the reputation of the presidential nominee for the opposition party.
The Threat Activity
To counter and mitigate risks to elections, it is important to properly categorize the specific activity and intent. While terms like “election interference” are often used to describe all of the threats in this space, some of the malicious activity FireEye has witnessed may fall outside this definition. Broadly speaking, most election-related threats can be thought of in four categories: social media-enabled disinformation, cyber espionage, “hack and leak” campaigns, and attacks on critical election infrastructure.
- Social Media-Enabled Disinformation: This category includes the activity FireEye has tracked from the Russia-affiliated Internet Research Agency (IRA) and various Iranian disinformation operations. In some cases, this has involved creating fraudulent content on controversial issues and seeking to promote it across social media platforms. In other examples, disinformation campaigns have focused on amplifying issues that already have organic interest. Some of these campaigns may also engage in politically motivated messaging on social media platforms prior to elections without a specific focus on electoral events.
- Cyber Espionage: Nation-state actors like Russia-nexus APT28 and Sandworm Team, and China-nexus APT40, have carried out cyber espionage operations against multiple types of targets in the election ecosystem, with intrusions into everything from political campaigns to election commissions, likely for a variety of reasons. In some cases, these actors may be seeking to obtain information on the policy stances of candidates and political parties. In other situations, particularly those involving election administrators or system vendors, it is possible that these intrusions are reconnaissance for further operations, seeking to understand network layouts that could allow movement into more critical infrastructure.
- “Hack and Leak” Campaigns: Some threat actors that FireEye has observed have utilized the data they’ve gained from espionage intrusions to then leak that information with the intent of influencing public perception. In this manner, they combine the previous two categories of activity. Notably, this tactic has been employed by Guccifer 2.0 and DC Leaks in the 2016 U.S. election. In some cases, similar tactics have leveraged compromised infrastructure to carry out disinformation operations, such as in the 2014 Ukrainian presidential campaign in which Russian-nexus actors posted erroneous election results from the compromised Ukrainian election commission website.
- Attacks on Critical Election Infrastructure: Compromises of core critical infrastructure such as election management systems, voting systems, and electronic pollbooks represent the most critical risks to elections, with the potential to alter or delete votes, or to remove voters from voter rolls. Though this is an often-discussed risk, there is limited evidence of intrusion activity targeting core election infrastructure.
Of the activity described here, FireEye has observed the full spectrum from Russian-nexus actors: carrying out intrusions into organizations and stealing data, leaking that data through online personas and fronts, and targeting election infrastructure. From limited observations, China has for the most part focused solely on cyber espionage operations, as in the activity FireEye reported on targeting the 2018 Cambodian election. With varying motivations, hacktivists and criminal entities have also, in limited instances, targeted parts of the election ecosystem.
While there is increasing global awareness of threats to elections, election administrators and others continue to face challenges in ensuring the integrity of the vote. To properly counter threats to elections, individuals and organizations involved in the electoral process should:
- Learn the Playbook of the Adversary: Proactive organizations can learn from the activity of threat actors uncovered in other elections and implement security controls that adapt to new tools and TTPs. Political campaigns and others should also educate staff and contractors on common spear-phishing tactics used by some of the primary APT groups.
- Incorporate Threat Intelligence for Context: Operationally, security organizations can utilize threat intelligence to better differentiate and triage the most important alerts from untargeted commodity malware activity.
- Anticipate External Threats: Beyond the internal networks of county governments and political campaigns, election administrators and risk management professionals involved in elections should prepare plans for dealing with leaked and compromised data, understanding how threat actors may utilize this for disinformation campaigns.
By: Anshu, Software Engineer
“The mind is not a vessel that needs filling, but wood that needs igniting.”—Mestrius Plutarchus
A mentor isn’t someone who answers your questions, but someone who helps you ask the right ones. After joining the McAfee WISE mentorship program as a mentee, I understood the essence of these words.
WISE is a community committed to providing opportunities for growth and success, increasing engagement, and empowering women at McAfee. Each year, WISE helps women network and find opportunities for their career development.
Joining the McAfee WISE Mentorship Program
The WISE Mentorship Program was introduced to address how women have been underrepresented in the tech sector, especially in cybersecurity. It’s believed that mentoring can address and improve job satisfaction and retention, which is how the program found its way to India and I learned about it. As an employee at McAfee for over five years, I had the opportunity to learn a lot of new things, but networking was a skillset I needed to hone. I thought this might be my chance to develop my skills, so I enrolled as a mentee.
I was partnered with “Chandramouli” also known as “Mouli” who happened to be the executive sponsor for the WISE India Chapter, as well as one of our IT leaders.
The Mentor-Mentee Relationship
My sessions with Mouli were informal conversations rather than formal sync-ups. We not only discussed the industry and women in tech—but also our personal stories, the books we read and are inspired by. We discovered a common love for badminton, so we started sharing analogies of how we would handle situations at work compared to game and life scenarios.
And the lessons learned were humbling. You win, you lose, you conquer. This thought shifted my perspective to think about how I would react if it was a badminton match. Would I accept defeat even if the opponent was on game point? Would I play differently even if I knew the match was lost? I realized I would fight and fiercely compete. This simple shift started to make me think on my toes daily.
Like many people, I had a fair idea of how I wanted my career to shape up, but with the help of a mentor, I began to steer faster toward my goal. In just one session, we were able to identify areas that were slowing down my development.
Developing My Skills
We noticed that networking was one of my key improvement areas, so we decided to tackle this with baby steps. He assigned small but achievable tasks to me—tasks as simple as creating a LinkedIn profile and connecting with former and current co-workers.
What happened after that was truly amazing. People from all walks of life in the industry, from my school, college, and more, started connecting with me, and it was then that I realized I had made an impression. Now I find it easier to initiate conversations, knowing that people are ready to help and talk about things we mutually love. As small as these strides might be, they helped me not just move ahead, but also provided me with measurable momentum.
Being able to discuss and question the status quo and engage with someone who is more experienced, knows the art of the game, and is a fierce champion for WISE is something I look forward to every month. Thanks to McAfee for giving each one of us this opportunity to help further our careers and to help us dream big.
Interested in joining our team? We’re hiring! Apply now.
Online graphic design tools are extremely useful when it comes to creating resumes, social media graphics, invitations, and other designs and documents. Unfortunately, these platforms aren’t immune to malicious online activity. Canva, a popular Australian web design service, was recently breached by a malicious hacker, resulting in the compromise of 139 million user records.
So, how was this breach discovered? The hacker, who goes by the name GnosticPlayers, contacted a security reporter from ZDNet on May 24th and made him aware of the situation. The hacker claims to have stolen data pertaining to 1 billion users from multiple websites. The compromised data from Canva includes names, usernames, email addresses, city, and country information.
Canva claims to securely store all user passwords to the highest standards using the bcrypt algorithm. Bcrypt is a strong, deliberately slow password-hashing algorithm designed to be difficult and time-consuming for hackers to crack, since hashing is a one-way function. Additionally, each Canva password was salted, meaning that random data was added to every password so that identical passwords used across the platform would not produce identical hashes. According to ZDNet, 61 million of the affected users had their passwords hashed with the bcrypt algorithm, while 78 million users had their Gmail addresses exposed in the breach.
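To illustrate why salting matters, the sketch below uses PBKDF2 from Python’s standard library as a stand-in for bcrypt (which is not in the stdlib); both are deliberately slow, salted, one-way password hashes. All names here are illustrative, not Canva’s implementation:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, analogous to bcrypt's work factor

def hash_password(password, salt=None):
    """Return (salt, digest) from a salted, slow, one-way hash."""
    salt = salt if salt is not None else os.urandom(16)  # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

# Two users with the same password get different digests thanks to the
# salt, so a database leak does not reveal that the passwords match.
s1, d1 = hash_password("hunter2")
s2, d2 = hash_password("hunter2")
print(d1 != d2, verify("hunter2", s1, d1), verify("wrong", s1, d1))
# True True False
```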
Canva has notified users of the breach through email and ensured that their payment card and other financial data is safe. However, even if you aren’t a Canva user, it’s important to be aware of what cybersecurity precautions you should take in the event of a data breach. Check out the following tips:
- Change your passwords. As an added precaution, Canva is encouraging their community of users to change their email and Canva account passwords. If a cybercriminal got a hold of the exposed data, they could gain access to your other accounts if your login credentials were the same across different platforms.
- Check to see if you’ve been affected. If you’ve used Canva and believe your data might have been exposed, use this tool to check or set an alert to be notified of other potential data breaches.
- Secure your personal data. Use a security solution like McAfee Identity Theft Protection. If your information is compromised during a breach, Identity Theft Protection helps monitor and keep tabs on your data in case a cybercriminal attempts to use it.
Reverse engineers, forensic investigators, and incident responders have an arsenal of tools at their disposal to dissect malicious software binaries. When performing malware analysis, they successively apply these tools in order to gradually gather clues about a binary’s function, design detection methods, and ascertain how to contain its damage. One of the most useful initial steps is to inspect a binary’s printable characters via the Strings program. A binary will often contain strings if it performs operations like printing an error message, connecting to a URL, creating a registry key, or copying a file to a specific location – each of which provides crucial hints that can help drive future analysis.
Manually sifting out the relevant strings can be time-consuming and error-prone, especially considering that:
- Relevant strings occur disproportionately less often than irrelevant strings.
- Larger binaries can output upwards of tens of thousands of individual strings.
- The definition of “relevant” can vary significantly across individual human analysts.
Investigators would never want to miss an important clue that could have reduced their time spent performing the malware analysis, or even worse, led them to draw incomplete or incorrect conclusions. In this blog post, we will demonstrate how the FireEye Data Science (FDS) and FireEye Labs Reverse Engineering (FLARE) teams recently collaborated to streamline this analyst pain point using machine learning.
Each string returned by the Strings program is a sequence of 3 or more characters ending with a null terminator, extracted independent of any surrounding context and file formatting. These loose criteria mean that Strings may identify sequences of characters as strings when they are not human-interpretable. For example, if the consecutive bytes 0x31, 0x33, 0x33, 0x37, 0x00 appear within a binary, Strings will interpret this as “1337.” However, those ASCII characters may not actually represent that string per se; they could instead represent a memory address, CPU instructions, or even data utilized by the program. Strings leaves it up to the analyst to filter out such irrelevant strings that appear within its output. For instance, only a handful of the strings listed in Figure 1, which originate from an example malicious binary, are relevant from a malware analyst’s point of view.
Figure 1: An example Strings output containing 44 strings for a toy sample with a SHA-256 value of eb84360ca4e33b8bb60df47ab5ce962501ef3420bc7aab90655fd507d2ffcedd.
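The loose criteria above can be sketched in a few lines. The following is a simplified re-implementation for illustration only; unlike the real tool, it does not require the trailing null byte or handle Unicode strings:

```python
import re

def extract_strings(data, min_len=3):
    """Runs of min_len+ printable ASCII bytes (0x20-0x7e), taken with no
    regard for surrounding context, mirroring Strings' loose criteria."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# The bytes 0x31 0x33 0x33 0x37 could be a memory address or instruction
# operands, but they are reported as the string "1337" all the same.
blob = b"\x00\x01" + bytes([0x31, 0x33, 0x33, 0x37, 0x00]) + b"\xfe\xffOK!"
print(extract_strings(blob))  # ['1337', 'OK!']
```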
Ranking strings in terms of descending relevance would make an analyst’s life much easier. They would then only need to focus their attention on the most relevant strings located towards the top of the list, and simply disregard everything below. However, solving the task of automatically ranking strings is not trivial. The space of relevant strings is unstructured and vast, and devising finely tuned rules to robustly account for all the possible variations among them would be a tall order.
Learning to Rank Strings Output
This task can instead be formulated in a machine learning (ML) framework called learning to rank (LTR), which has historically been applied to problems like information retrieval, machine translation, web search, and collaborative filtering. One way to tackle LTR problems is by using Gradient Boosted Decision Trees (GBDTs). GBDTs successively learn individual decision trees that reduce the loss using a gradient descent procedure, and ultimately use a weighted sum of every tree’s prediction as an ensemble. GBDTs with an LTR objective function can learn class probabilities to compute each string’s expected relevance, which can then be used to rank a given Strings output. We provide a high-level overview of how this works in Figure 2.
In the initial train() step of Figure 2, over 25 thousand binaries are run through the Strings program to generate training data consisting of over 18 million total strings. Each training sample then corresponds to the concatenated list of ASCII and Unicode strings output by the Strings program on that input file. To train the model, these raw strings are transformed into numerical vectors containing natural language processing features like Shannon entropy and character co-occurrence frequencies, together with domain-specific signals like the presence of indicators of compromise (e.g. file paths, IP addresses, URLs, etc.), format strings, imports, and other relevant landmarks.
Figure 2: The ML-based LTR framework ranks strings based on their relevance for malware analysis. This figure illustrates different steps of the machine learning modeling process: the initial train() step is denoted by solid arrows and boxes, and the subsequent predict() and sort() steps are denoted by dotted arrows and boxes.
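As one concrete example of the natural language processing features computed during train(), Shannon entropy over a string’s character distribution is straightforward to derive (a sketch for illustration, not the production feature code):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """H(s) = sum over characters c of p(c) * log2(1 / p(c))."""
    n = len(s)
    counts = Counter(s)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Low-entropy strings (padding, repeated characters) and high-entropy
# strings (random-looking tokens) land at opposite ends of the scale,
# providing one weak signal among the many the model combines.
print(shannon_entropy("aaaaaaaa"))  # 0.0
print(shannon_entropy("abcdefgh"))  # 3.0
```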
Each transformed string’s feature vector is associated with a non-negative integer label that represents its relevance for malware analysis. Labels range from 0 to 7, with higher numbers indicating increased relevance. To generate these labels, we leverage the subject matter knowledge of FLARE analysts to apply heuristics and impose high-level constraints on the resulting label distributions. While this weak supervision approach may generate noise and spurious errors compared to an ideal case where every string is manually labeled, it also provides an inexpensive and model-agnostic way to integrate domain expertise directly into our GBDT model.
Next during the predict() step of Figure 2, we use the trained GBDT model to predict ranks for the strings belonging to an input file that was not originally part of the training data, and in this example query we use the Strings output shown in Figure 1. The model predicts ranks for each string in the query as floating-point numbers that represent expected relevance scores, and in the final sort() step of Figure 2, strings are sorted in descending order by these scores. Figure 3 illustrates how this resulting prediction achieves the desired goal of ranking strings according to their relevance for malware analysis.
Figure 3: The resulting ranking on the strings depicted in both Figure 1 and in the truncated query of Figure 2. Contrast the relative ordering of the strings shown here to those otherwise identical lists.
The predicted and sorted string rankings in Figure 3 show network-based indicators at the top of the list, followed by registry paths and entries. These reveal the potential C2 server and malicious behavior on the host. The subsequent output consisting of user-related information is more likely to be benign, but still worthy of investigation. Rounding out the list are common strings like Windows API functions and PE artifacts that tend to raise no red flags for the malware analyst.
While it seems like the model qualitatively ranks the above strings as expected, we would like some quantitative way to assess the model’s performance more holistically. What evaluation criteria can we use to convince ourselves that the model generalizes beyond the coverage of our weak supervision sources, and to compare models that are trained with different parameters?
We turn to the recommender systems literature, which uses the Normalized Discounted Cumulative Gain (NDCG) score to evaluate the ranking of items (i.e. individual strings) in a collection (i.e. a Strings output). NDCG sounds complicated, but let’s boil it down one letter at a time:
- “G” is for gain, which corresponds to the magnitude of each string’s relevance.
- “C” is for cumulative, which refers to the cumulative gain or summed total of every string’s relevance.
- “D” is for discounted, which divides each string’s predicted relevance by a monotonically increasing function like the logarithm of its ranked position, reflecting the goal of having the most relevant strings ranked towards the top of our predictions.
- “N” is for normalized, which means dividing DCG scores by ideal DCG scores calculated for a ground truth holdout dataset, which we obtain from FLARE-identified relevant strings contained within historical malware reports. Normalization makes it possible to compare scores across samples since the number of strings within different Strings outputs can vary widely.
Figure 4: Kernel Density Estimate of NDCG@100 scores for Strings outputs from the holdout dataset. Scores are calculated for the original ordering after simply running the Strings program on each binary (gray) versus the predicted ordering from the trained GBDT model (red).
In practice, we take the first k strings indexed by their ranks within a single Strings output, where the k parameter is chosen based on how many strings a malware analyst will attend to or deem relevant on average. For our purposes we set k = 100 based on the approximate average number of relevant strings per Strings output. NDCG@k scores are bounded between 0 and 1, with scores closer to 1 indicating better prediction quality in which more relevant strings surface towards the top. This measurement allows us to evaluate the predictions from a given model versus those generated by other models and ranked with different algorithms.
To quantitatively assess model performance, we run the strings from each sample that have ground truth FLARE reports through the predict() step of Figure 2, and compare their predicted ranks with a baseline of the original ranking of strings output by Strings. The divergence in distributions of NDCG@100 scores between these two approaches demonstrates that the trained GBDT model learns a useful structure that generalizes well to the independent holdout set (Figure 4).
In this blog post, we introduced an ML model that learns to rank strings based on their relevance for malware analysis. Our results illustrate that it can rank Strings output based both on qualitative inspection (Figure 3) and quantitative evaluation of NDCG@k (Figure 4). Since Strings is so commonly applied during malware analysis at FireEye and elsewhere, this model could significantly reduce the overall time required to investigate suspected malicious binaries at scale. We plan on continuing to improve its NDCG@k scores by training it with more high fidelity labeled data, incorporating more sophisticated modeling and featurization techniques, and soliciting further analyst feedback from field testing.
It’s well known that malware authors go to great lengths to conceal useful strings from analysts, and a potential blind spot to consider for this model is that the utility of Strings itself can be thwarted by obfuscation. However, open source tools like the FireEye Labs Obfuscated Strings Solver (FLOSS) can be used as a drop-in replacement for Strings. FLOSS automatically extracts printable strings just as Strings does, but additionally reveals obfuscated strings that have been encoded, packed, or manually constructed on the stack. The model can be readily trained on FLOSS outputs to rank even obfuscated strings. Furthermore, since it can be applied to arbitrary lists of strings, the model could also be used to rank strings extracted from live memory dumps and sandbox runs.
This work represents a collaboration between the FDS and FLARE teams, which together build predictive models to help find evil and improve outcomes for FireEye’s customers and products. If you are interested in this mission, please consider joining the team by applying to one of our job openings.
I thought of this quote today as the debate rages about compromising municipalities and other information technology-constrained yet personal information-rich organizations.
Several years ago I wrote If You Can't Protect It, Don't Collect It. I argued that if you are unable to defend personal information, then you should not gather and store it.
In a similar spirit, here I argue that if you are unable to securely operate information technology that matters, then you should not be supporting that IT.
You should outsource it to a trustworthy cloud provider, and concentrate on managing secure access to those services.
If you cannot outsource it, and you remain incapable of defending it natively, then you should integrate a capable managed security provider.
It's clear to me that a large portion of those running PI-processing IT are simply not capable of doing so in a secure manner, and they do not bear the full cost of PI breaches.
They have too many assets, with too many vulnerabilities, and are targeted by too many threat actors.
These organizations lack sufficient people, processes, and technologies to mitigate the risk.
They have successes, but they are generally due to the heroics of individual IT and security professionals, who often feel out-gunned by their adversaries.
If you can't patch a two-year-old vulnerability prior to exploitation, or detect an intrusion and respond to the adversary before he completes his mission, then you are demonstrating that you need to change your entire approach to information technology.
The security industry seems to think that throwing more people at the problem is the answer, yet year after year we read about several million job openings that remain unfilled. This is a sign that we need to change the way we are doing business. The fact is that those organizations that cannot defend themselves need to recognize their limitations and change their game.
I recognize that outsourcing is not a panacea. Note that I emphasized "IT" in my recommendation. I do not see how one could outsource the critical technology running on-premise in the industrial control system (ICS) world, for example. Those operations may need to rely more on outsourced security providers, if they cannot sufficiently detect and respond to intrusions using in-house capabilities.
Remember that the vast majority of organizations do not exist to run IT. They run IT to support their lines of business. Many older organizations have indeed been migrating legacy applications to the cloud, and most new organizations are cloud-native. These are hopeful signs, as the older organizations could potentially "age-out" over time.
This puts a burden on the cloud providers, who fall into the "managed service provider" category that I wrote about in my recent Corelight blog. However, the more trustworthy providers have the people, processes, and technology in place to handle their responsibilities in a more secure way than many organizations who are struggling with on-premise legacy IT.
Everyone's got to know their limitations.
When deciding how to go about protecting your company’s sensitive data, there are plenty of different solutions to choose from, such as endpoint controls, file system controls, or even network traffic inspection. However, the technology is only as effective as the people and processes in charge of configuring, managing, and monitoring it. That’s why it’s important that technology is not your only method of protecting your data, but instead a way to complement a strategy consisting of internal policies, procedures, and operations. This approach is called Data Loss Prevention (DLP), and should be implemented by every organization, regardless of size.
Why exactly should you consider a DLP strategy? Here are four of the main reasons:
1. You have sensitive information.
You have data; every company does. That data is important to your business, your customers, and you. We frequently hear about companies that discover a data breach only months, or even years, after it occurred. Take Marriott International, for example. Marriott acquired a hotel chain called Starwood in 2016. What Starwood and Marriott didn’t know at the time was that Starwood had been breached in 2014. The attacker remained in the system after Marriott and Starwood merged their systems.
It wasn’t until 2018, four years later, that the breach was discovered. If Marriott had implemented an effective DLP strategy, they could have detected and purged the breach sooner through a number of different preventive or investigative procedures.
Your data is sensitive to your business’ success and should only be handled by people that you trust: you and your employees (on a least-privilege basis).
2. Human error.
Employees can unintentionally leave sensitive data vulnerable. Whether that means they leave file systems vulnerable to unauthorized access, forget to flag an email as sensitive that contains Personally Identifiable Information (PII), or hand their coworker removable media with a list full of Social Security Numbers used for background checks, it should go without saying that humans can make mistakes.
That’s where a thorough DLP strategy can help; if a DLP solution is configured and monitoring your environment according to your policies, you can set enforced rules that prevent these mistakes and generate accurate reports and alerts. Combine this with employee security training, and you have a chance to fix the potential damage to your business before it happens.
If you lack the resources to set up those configurations, reports, and alerts, you can hire a Managed Security Service Provider (MSSP) to take care of those for you.
3. Malicious Insider threats.
Consider the following scenario:
You hire an individual and they have been performing expertly. They seem to enjoy the job and they haven’t requested a raise in years. Little did you know that when you hired them, they immediately started stealing and selling your data to the highest bidder. This is an extreme scenario, but it does happen. There are organizations and nation-states that will pay top dollar for your sensitive data, and they will gladly target your employees to do it.
Implementing a DLP strategy that includes thorough scenario training can discourage your employees from being persuaded into selling your data, as well as help catch those who do it. With properly trained employees and an effective chain of command, insiders can be reported by their peers at every possible point and be stopped before serious damage is done to your business’ reputation and resulting profits.
4. This is the 21st century of interconnectivity.
We’re connected to everything these days; it’s human nature to crave popularity, which has caused an obsession with online presence that doesn’t always take into account protecting sensitive data. When there are so many easy ways to send, receive, and view different types of communications from so many devices, it’s easy to blur the line of what belongs on which devices.
We’ve already pointed out that humans make mistakes; why not use a well thought-out DLP strategy and implemented technology to keep an eye on your critical data and tell you when your employees do make those mistakes? Whether you implement your own monitoring team or contract with an MSSP, a DLP solution will solidify those previously blurred lines of where that data does and doesn’t belong.
In summary, you need a DLP strategy because you have sensitive data, you employ humans who can or might want to sell your data, and even the best policies and procedures can’t stop someone from unknowingly exposing your company’s data. A Data Loss Prevention Strategy written with supporting technologies in mind can mitigate those risks.
The post 4 Reasons Your Organization Needs a Data Loss Prevention Strategy appeared first on GRA Quantum.
- Digital: email, uploading to cloud services, and copying to external storage (11%)
- Using steganography or encryption tools to hide exfiltration (8%)
- Printing information (11%)
- Copying information by hand (9%)
- Photographing information (8%)
- Personal Work (19%)
- Customer Information i.e. contact details, confidential market information, sales pipeline (11%)
- Company Assets i.e. passwords to subscription services, company benefits (7%)
- Value for their future career success in their next role (12%)
- To keep a record of their work (12%)
- Benefit their career (10%)
- Financial, specifically paid to do so by an outside third party (8.5%)
When a security-related defect is found in code, it’s easy for security teams to jump to conclusions and place the blame on the developers. However, security teams need to change their approach to this issue and start understanding why there is a gap in developers’ security knowledge. Furthermore, how can we overcome that hurdle and provide our developers the tools they need to produce secure software from the start of the coding process?
Recently, Forrester’s Amy DeMartine and Trevor Lyness put together a report, “Show, Don’t Tell, Your Developers How To Write Secure Code,” to demonstrate how to use application security testing to educate developers.
Where Does the Knowledge Gap Come From?
There are a few reasons why developers have a lack of security knowledge; one significant reason is the fact that developers aren’t taught application security in school. Forrester looked at the top 40 computer science programs and found that “none of the top ranked computer science programs in the United States require a class about secure coding or secure application design.” Furthermore, general cybersecurity is offered as an option – rather than as a priority – in many schools. Only one school out of the top 40 requires a general cybersecurity course to obtain a degree in computer science.
Not only is there a lack of formal cybersecurity education, but there’s also a general unawareness about application security trends, for instance, using insecure open source components. There’s no doubt about it – open source code is a huge time-saver for developers; after all, we live in a world where time is money. Rapidly releasing software can be a huge competitive advantage for businesses across multiple industries, and open source components save a large amount of time for coders. Unfortunately, many developers don’t know that open source code is riddled with vulnerabilities that can expose the entire organization to risk. That’s where security professionals come in: they need to be working with developers to ensure they have the knowledge and resources they need to code securely.
When you consider the widespread lack of formal cybersecurity education, paired with a general unfamiliarity of application security trends, it’s no wonder that many development teams are flying blind when it comes to software security.
How Can We Help Developers?
Forrester puts it best when they say, “With the right practices and technology in place, you can encourage and enforce secure coding practices and developer accountability without sacrificing speed or quality.” Many application security solutions today educate developers on the job. Forrester emphasizes the importance of choosing a tool that has brief training modules that integrate right into the testing tools that the developers are using.
Another important tool to equip your developers with is a software composition analysis (SCA) tool. Developers aren’t going to stop using open source components any time soon, and it’s crucial that they’re staying on top of all of the most recent open source vulnerabilities that have been discovered. If they happen to be using an insecure component from a vulnerable library, it could mean bad news for your organization if a cyberattacker attempts to exploit it. Veracode Software Composition Analysis alerts your developers of all of the new vulnerabilities that hit the news, and tells them if they’re using the vulnerable component so that they can go in and remediate the vulnerability as soon as possible.
Beyond tools, organizations can adopt practices like red team exercises to put their developers in the role of an attacker. Learning about hacking techniques will help change their mindsets to think about how an attacker might try to penetrate their code, and they’ll keep that in mind as they design applications down the road. Assigning developer security champions puts a security advocate on your product team, without having to convince all of your developers individually to devote themselves to security. A security champion can act as a liaison between your security team and developers, and they can help convey security priorities to their colleagues.
Developers are one of your first lines of defense against a potential cyberattack, and with applications being the most frequent attack vector for companies, getting your development teams to start coding securely should be priority number one. Developers may be responsible for application security, but security professionals need to actively work with them and make sure they have the tools they need to execute the task. Check out Forrester’s April 2019 report, “Show, Don’t Tell, Your Developers How To Write Secure Code,” and get on the path towards creating more secure code.
In August 2018, FireEye Threat Intelligence released a report exposing what we assessed to be an Iranian influence operation leveraging networks of inauthentic news sites and social media accounts aimed at audiences around the world. We identified inauthentic social media accounts posing as everyday Americans that were used to promote content from inauthentic news sites such as Liberty Front Press (LFP), US Journal, and Real Progressive Front. We also noted a then-recent shift in branding for some accounts that had previously self-affiliated with LFP; in July 2018, the accounts dropped their LFP branding and adopted personas aligned with progressive political movements in the U.S. Since then, we have continued to investigate and report on the operation to our intelligence customers, detailing the activity of dozens of additional sites and hundreds of additional social media accounts.
Recently, we investigated a network of English-language social media accounts that engaged in inauthentic behavior and misrepresentation and that we assess with low confidence was organized in support of Iranian political interests. In addition to utilizing fake American personas that espoused both progressive and conservative political stances, some accounts impersonated real American individuals, including a handful of Republican political candidates that ran for House of Representatives seats in 2018. Personas in this network have also had material published in U.S. and Israeli media outlets, attempted to lobby journalists to cover specific topics, and appear to have orchestrated audio and video interviews with U.S. and UK-based individuals on political issues. While we have not at this time tied these accounts to the broader influence operation we identified last year, they promoted material in line with Iranian political interests in a manner similar to accounts that we have previously assessed to be of Iranian origin. Most of the accounts in the network appear to have been suspended on or around the evening of 9 May, 2019. Appendix 1 provides a sample of accounts in the network.
The accounts, most of which were created between April 2018 and March 2019, used profile pictures appropriated from various online sources, including, but not limited to, photographs of individuals on social media with the same first names as the personas. As with some of the accounts that we identified to be of Iranian origin last August, some of these new accounts self-described as activists, correspondents, or “free journalist[s]” in their user descriptions. Some accounts posing as journalists claimed to belong to specific news organizations, although we have been unable to identify individuals belonging to those news organizations with those names.
Narratives promoted by these and other accounts in the network included anti-Saudi, anti-Israeli, and pro-Palestinian themes. Accounts expressed support for the Joint Comprehensive Plan of Action (JCPOA), commonly known as the Iran nuclear deal; opposition to the Trump administration’s designation of Iran’s Islamic Revolutionary Guard Corps (IRGC) as a Foreign Terrorist Organization; antipathy toward the Ministerial to Promote a Future of Peace and Security in the Middle East (a U.S.-led conference that focused on Iranian influence in the Middle East more commonly known as the February 2019 Warsaw Summit); and condemnation of U.S. President Trump’s veto of a resolution passed by Congress to end U.S. involvement in the Yemen conflict.
Figure 1: Sample tweets on the Trump administration’s designation of Iran’s IRGC as a Foreign Terrorist Organization
Interestingly, some accounts in the network also posted a small amount of messaging seemingly contradictory to their otherwise pro-Iran stances. For example, while one account’s tweets were almost entirely in line with Iranian political interests, including a tweet claiming that “iran has shown us that his nuclear program is peaceful [sic],” the account also posted a series of tweets directed at U.S. President Trump on Sept. 25, 2018, the same day that he gave a speech to the United Nations in which he excoriated the Iranian Government. The account called on Trump to attack Iran, using the hashtags #attack_Iran, #go_to_hell_Rouhani, #stop_sanctions, #UnitedNations, and #trump_speech; other accounts in the network, which likewise predominantly held pro-Iran stances, echoed these sentiments, using the same or similar hashtags. It is possible that these accounts were seeking to build an audience with views antipathetic to Iran that could then later be targeted with pro-Iranian messaging.
Apart from the narratives and messaging promoted, we observed several limited indicators that the network was operated by Iranian actors. For example, one account in the network, @AlexRyanNY, created in 2010, had only two visible tweets prior to 2017, one of which, from 2011, was in Persian and of a personal nature. Subsequently in 2017, @AlexRyanNY claimed, in a tweet directed at a Democratic political strategist, to be “an Iranian who supported Hillary.” This account, using the display name “Alex Ryan” and claiming to be a Newsday correspondent, appropriated the photograph of a genuine individual also with the first name of Alex. We note that it is possible that the account was compromised from another individual or that it was merely repurposed by the same actor. Additionally, while most of the accounts in the network had their interface languages set to English, we observed that one account had its interface language set to Persian.
Impersonation of U.S. Political Candidates
Some Twitter accounts in the network impersonated Republican political candidates that ran for House of Representatives seats in the 2018 U.S. congressional midterms. These accounts appropriated the candidates’ photographs and, in some cases, plagiarized tweets from the real individuals’ accounts. Aside from impersonating real U.S. political candidates, the behavior and activity of these accounts resembled that of the others in the network.
For example, the account @livengood_marla impersonated Marla Livengood, a 2018 candidate for California’s 9th Congressional District, using a photograph of Livengood and a campaign banner for its profile and background pictures. The account began tweeting on Sept. 24, 2018, with its first tweet plagiarizing one from Livengood’s official account earlier that month:
Figure 2: Tweet by suspect account @livengood_marla, dated Sept. 24, 2018 (left); tweet by Livengood’s verified account, dated Sept. 1, 2018 (right)
The @livengood_marla account plagiarized a number of other tweets from Livengood’s official account, including some that referenced Livengood’s official account username:
Figure 3: Tweet by suspect account @livengood_marla, dated Sept. 24, 2018 (left); tweet by Livengood’s verified account, dated Sept. 3, 2018 (right)
The @livengood_marla account also tweeted various news snippets on both political and apolitical subjects, such as the confirmation of Brett Kavanaugh to the U.S. Supreme Court and the wedding of the UK’s Princess Eugenie and Jack Brooksbank, prior to segueing into promoting material more closely aligned with Iranian interests. For example, the account, along with others in the network, commemorated the United Nations’ International Day of the Girl Child with a photograph of emaciated children in Yemen, as well as narratives pertaining to the killing of Saudi journalist Jamal Khashoggi and Saudi Shiite child Zakaria al-Jaber, intended to portray Saudi Arabia in a negative light.
In another example, the account @ButlerJineea impersonated Jineea Butler, a 2018 candidate for New York’s 13th Congressional District, using a photograph of Butler for its profile picture and incorporating her campaign slogans into its background picture, as well as claiming in its Twitter bio to be a “US House candidate, NY-13” and linking to Butler’s website, jineeabutlerforcongress.com.
Figure 4: Suspect account @ButlerJineea (left); apparent legitimate, currently inactive account @Jineea4congress (right)
These and other accounts in the network plagiarized tweets from additional sources beyond the individuals they impersonated, including other U.S. politicians, about both political and apolitical topics.
Influence Activity Leveraged U.S. and Israeli Media
In addition to directly posting material on social media, we observed some personas in the network leverage legitimate print and online media outlets in the U.S. and Israel to promote Iranian interests via the submission of letters, guest columns, and blog posts that were then published. We also identified personas that we suspect were fabricated for the sole purpose of submitting such letters, but that do not appear to maintain accounts on social media. The personas claimed to be based in varying locations depending on the news outlets they were targeting for submission; for example, a persona that listed their location as Seattle, WA in a letter submitted to the Seattle Times subsequently claimed to be located in Baytown, TX in a letter submitted to The Baytown Sun. Other accounts in the network then posted links to some of these letters on social media.
The letters and columns, many of which were published in 2018 and 2019, but which date as far back as 2015, were mostly published in small, local U.S. news outlets; however, several larger outlets have also published material that we suspect was submitted by these personas (see Appendix 2). In at least two cases, the text of letters purportedly authored by different personas and published in different newspapers was identical or nearly identical, while in other instances, separate personas promoted the same narratives in letters published within several days of each other. The published material was not limited to letters; one persona, “John Turner,” maintained a blog on The Times of Israel website from January 2017 to November 2018, and wrote articles for the U.S.-based site Natural News Blogs from August 2015 to July 2018. The letters and articles primarily addressed themes or promoted stances in line with Iranian political interests, similar to the activity conducted on social media.
Figure 5: Sample letter published in Galveston County’s (Texas) The Daily News, authored by suspect persona Mathew O’Brien
We have thus far identified at least five suspicious personas that have had letters or other content published by legitimate news outlets. We surmise that additional personas exist, based on other investigatory leads.
“John Turner”: The John Turner persona has been active since at least 2015. Turner has claimed to be based, variously, in New York, NY, Seattle, WA, and Washington, DC. Turner described himself as a journalist in his Twitter profile, though has also claimed both to work at the Seattle Times and to be a student at Villanova University, claiming to be attending between 2015 and 2020. In addition to letters published in various news outlets, John Turner maintained a blog on The Times of Israel site in 2017 and 2018 and has written articles for Natural News Blogs. At least one of Turner’s letters was promoted in a tweet by another account in the network.
“Ed Sullivan”: The Ed Sullivan persona, which has on at least one occasion used the same headshot as that of John Turner, has had letters published in the Galveston County, Texas-based The Daily News, the New York Daily News, and the Los Angeles Times, including some letters identical in text to those authored by the “Jeremy Watte” persona (see below) published in the Texas-based outlet The Baytown Sun. Ed Sullivan has claimed his location to be, variously, Galveston and Newport News (Virginia).
“Mathew Obrien”: The Mathew Obrien persona, whose name has also been spelled “Matthew Obrien” and “Mathew O’Brien”, claimed in his Twitter bio to be a Newsday correspondent. The persona has had letters published in Galveston County’s The Daily News and the Athens, Texas-based Athens Daily Review; in those letters, his claimed locations were Galveston and Athens, respectively, while the persona’s Twitter account, @MathewObrien1, listed a location of New York, NY. At least one of Obrien’s letters was promoted in a tweet by another account in the network.
“Jeremy Watte”: Letters signed by the Jeremy Watte persona have been published in The Baytown Sun and the Seattle Times, where he claimed to be based in Baytown and Seattle, respectively. The texts of at least two letters signed by Jeremy Watte are identical to that in letters published in other newspapers under the name Ed Sullivan. At least one of his letters was promoted in a tweet by another account in the network.
“Isabelle Kingsly”: The Isabelle Kingsly persona claimed on her Twitter profile (@IsabelleKingsly) to be an “Iranian-American” based in Seattle, WA. Letters signed by Kingsly have appeared in The Baytown Sun and the Newport News Virginia local paper The Daily Press; in those letters, Kingsly’s location is listed as Galveston and Newport News, respectively. The @IsabelleKingsly Twitter account’s profile picture and other posted pictures were appropriated from a social media account of what appears to be a real individual with the same first name of Isabelle. At least one of Kingsly’s letters was promoted in a tweet by another account in the network.
Other Media Activity
Personas in the network also engaged in other media-related activity, including criticism and solicitation of mainstream media coverage, and conducting remote video and audio interviews with real U.S. and UK-based individuals while presenting themselves as journalists. One of those latter personas presented as working for a mainstream news outlet.
Criticism/Solicitation of Media Coverage
Accounts in the network directed tweets at mainstream media outlets, calling on them to provide coverage of topics aligned with Iranian interests or, alternatively, criticizing them for insufficient coverage of those topics. For example, we observed accounts criticizing media outlets over their lack of coverage of the killing of Shiite child Zakaria al-Jaber in Saudi Arabia, as well as Saudi Arabia’s conduct in the Yemen conflict. While such activity might have been intended to directly influence the media outlets’ reporting, the accounts may have also been aiming to reach a wider audience by tweeting at outlets with a large following that would see those replies.
Figure 6: Sample tweets by suspect accounts calling on mainstream media outlets to increase their coverage of alleged Saudi activity in the Yemen conflict
“Media” Interviews with Real U.S., UK-Based Individuals
Accounts in the network, under the guise of journalist personas, also solicited various individuals over Twitter for interviews and chats, including real journalists and politicians. The personas appear to have successfully conducted remote video and audio interviews with U.S. and UK-based individuals, including a prominent activist, a radio talk show host, and a former U.S. Government official, and subsequently posted the interviews on social media, showing only the individual being interviewed and not the interviewer. The interviewees expressed views that Iran would likely find favorable, discussing topics such as the February 2019 Warsaw summit, an attack on a military parade in the Iranian city of Ahvaz, and the killing of Jamal Khashoggi.
The provenance of these interviews appears to have been misrepresented on at least one occasion, with one persona appearing to have falsely claimed to be operating on behalf of a mainstream news outlet; a remote video interview with a US-based activist about the Jamal Khashoggi killing was posted by an account adopting the persona of a journalist from the outlet Newsday, with the Newsday logo also appearing in the video. We did not identify any Newsday interview with the activist in question on this topic. In another instance, a persona posing as a journalist directed tweets containing audio of an interview conducted with a former U.S. Government official at real media personalities, calling on them to post about the interview.
We are continuing to investigate this and potentially related activity that may be conducted by actors in support of Iranian interests. At this time, we are unable to provide further attribution for this activity, and we note the possibility that the activity could have been designed for alternative purposes or include some small percentage of authentic behavior. However, if it is of Iranian origin or supported by Iranian state actors, it would demonstrate that Iranian influence tactics extend well beyond the use of inauthentic news sites and fake social media personas, to also include the impersonation of real individuals on social media and the leveraging of legitimate Western news outlets to disseminate favorable messaging. If this activity is being conducted by the same or related actors as those responsible for the Liberty Front Press network of inauthentic news sites and affiliated social media accounts that we exposed in August 2018, it may also suggest that these actors remain undeterred by public exposure or by social media platforms’ shutdowns of their accounts, and that they continue to seek to influence audiences within the U.S. toward positions in line with Iranian political interests.
Appendix 1: Sample Twitter accounts identified in this network, currently suspended.
Free journalist #resist
In search of reality.
It’s our duty to leave our Country-to our children-better than we found it
Don't think too hard, just have fun with life...
I drink lots of tea...
mad at politicians
In favor of sick minds
North Carolina, USA
mother of two
Daughter of best parent.
Do your best, just let your success shows your efforts.
Student at Harvard college.
somehow into politics.
I love gym
Angel of human
I do into beauty and humanity
Wife, mom of tow sons, student,
in favor of peace.
New York, USA
sports and into Music and gym
Student in college studying International law
A free person from everywhere
I'm somehow into politics
New York, USA
Correspondent at https://t.co/3hxSgtkuXh. 🎥📸Freelance Journalist. ➡️➡️oppose War and Brutality 💆♂️I was born in Beirut
In search of peace.
Really into politics and justice.
Love US and other countries.
New York, USA
high educated free journalist in favor of politics
in search of reality
Middle East issues
New York, USA
US House candidate, NY-13
U.S. Congressional Candidate for NY District 13 serving Harlem, Washington Heights and Western Bronx.US
save the US
Elizabeth Warren not for 2020
Single. Iranian-American. Lifestyle.And a tad of politics. @ewarren not for 2020.
A single boy,@Newsday correspondent , interested in news Scientist🔬. Animal 🐘 and Nature lover🌲, hiker and backpacker♍ .
New York, NY
The fight for human rights never sleeps, standing up for human rights across the world, wherever justice, freedom, fairness and truth are denied.
New York, USA
follow me to get follow back
follow me to get follow back
Las Vegas, NV
No Magats 🚫, 🔥 Anti War & Hate, Pro Equality, Humanity, Humor & Sensible Gun Reform
New York, USA
Journalist. RTs Are not necessarily endorsements. All views my own. #Resist
New Yorker, @Newsday correspondent.
You don't have a soul. You are a Soul. You have a body.
New York, USA
Table 1: Sample Twitter accounts identified in this network
Appendix 2: Sample letters published in news outlets submitted by personas identified in this network, August 2018 to April 2019.
Each entry below lists the publication date, the author’s listed location (where available), the outlet, the letter’s title, and a brief summary.
Aug. 1, 2018
The Baytown Sun (baytownsun.com)
Title: “Trump’s wall just a vanity project”
The letter argues against the Trump administration’s proposed border wall with Mexico. The text of the letter is identical to that published in Galveston County’s The Daily News (galvnews.com) on Aug. 4, 2018, three days later.
Aug. 4, 2018
Galveston County’s The Daily News (galvnews.com)
Title: “Trump cares not one wit about effects of shutdown”
The text of the letter is identical to that published in The Baytown Sun on Aug. 1.
Oct. 11, 2018
The Baytown Sun (baytownsun.com)
Title: “Time to fight for it”
The letter, written from the point of view of an individual aligned with the U.S. political left, calls on individuals to fight for justice.
Oct. 23, 2018
New York Daily News (nydailynews.com)
Title: “Don’t shrug off Khashoggi’s murder”
The letter argues that “the most fitting and best memorial to Jamal Khashoggi,” a Saudi journalist who was murdered in the Saudi embassy in Istanbul, “would be the swift end to the war in Yemen.”
Oct. 23, 2018
Los Angeles Times (latimes.com)
Title: “Don’t shrug off Khashoggi’s murder”
The letter is identical to that published in the New York Daily News on the same day.
Nov. 27, 2018
New York, NY
Times of Israel (blog.timesofisrael.com)
Title: “Saudi Arabia’s foreign policy is failing”
The letter states that the murder of Jamal Khashoggi is “the latest in a series of foreign policy blunders” committed by the Saudi Crown Prince Mohammed Bin Salman.
Nov. 30, 2018
New York, NY
Times of Israel (blog.timesofisrael.com)
Title: “Relations with Israel will not benefit Gulf states”
The letter argues that the Gulf states will not benefit from normalized relations with Israel, stating that “the Arab street” would not support those relations and that such a move would be risky for “the Gulf’s unelected rulers.”
Dec. 26, 2018
The Baytown Sun (baytownsun.com)
Title: “Wild West sheriff”
The letter argues that Trump is not an aberration in U.S. history, but rather an ideological descendant of various U.S. historical currents; the article also calls him “an authoritarian, racist madman.”
Jan. 18, 2019
Seattle Times (seattletimes.com)
Title: “ISIS’ ideology not defeated”
The letter, written in response to an article about Americans killed by an ISIS suicide bomber in Syria, asserts that the Islamic extremist ideology espoused by the terrorist group remains undefeated.
March 1, 2019
The Baytown Sun (baytownsun.com)
Title: “Sins of Saudi Arabia”
The letter is condemnatory of Saudi Arabia, citing its actions in the Yemen conflict, the killing of Jamal Khashoggi, the killing of Zakaria al-Jaber, a Shiite child, in Medina, and the imprisonment of Saudi women activists. The letter also defends Iran, stating that it is not responsible for similar crimes.
April 9, 2019
Galveston County’s The Daily News (galvnews.com)
Title: “Sanctioning Islamic corps is pure madness”
The letter condemns the Trump administration’s designation of the IRGC as a Foreign Terrorist Organization and claims that Trump is seeking to start a war with Iran.
April 11, 2019
Athens Daily Review (athensreview.com)
Title: “Trump, Bolton trying to start war with Iran”
The letter, similar to the April 9 letter published in Galveston County’s The Daily News, claims that Trump and Bolton are trying to start a war with Iran to use the war in Trump’s 2020 presidential campaign, while disregarding the alleged crimes of Saudi Arabia.
April 11, 2019
Daily Press (dailypress.com)
Title: “An uneasy path – Re; Recent Iran sanction reports”
The letter also argues that Trump and Bolton are seeking to start a war with Iran toward political ends.
April 19, 2019
The Baytown Sun (baytownsun.com)
Title: “Escalating hostility toward Iran”
The letter argues that the election of Trump to the U.S. presidency has set the U.S. on a dangerous course and condemns the U.S. withdrawal from the Iran nuclear deal (JCPOA), stating that “the ayatollahs have welcomed this abrogation of honor on Trump’s part.”
April 23, 2019
Galveston County’s The Daily News (galvnews.com)
Title: “Escalating hostility toward Iran is wrong, dangerous”
The text of this letter is nearly identical to that authored by Jeremy Watte and published in The Baytown Sun on April 19, excepting changes made in several sentences.
Table 2: Sample letters published in news outlets submitted by personas in this network
Greene King said the hackers were able to access:
- email address
- user ID
- encrypted password
- post code
No details were provided on how the hackers compromised the gift card website, but there is a clue within Greene King's email statement, which suggests the website had fixable security vulnerabilities: "we have taken action to prevent any further loss of personal information".
This is not the first data breach reported by Greene King in recent times: in November 2016, the bank details of 2,000 staff were accidentally leaked.
Greene King Personal Data Compromise Email to Customers
I am writing to inform you about a cyber-security breach affecting our website gkgiftcards.co.uk.
Suspicious activity was discovered on 14th May and a security breach was confirmed on 15th May. No bank details or payment information were accessed. However, the information you provided to us as part of your gift card registration was accessed. Specifically, the hackers were able to access your name, email address, user ID, encrypted password, address, post code and gift card order number. Whilst your password was encrypted, it may still be compromised. It is very important that you change your password on our website, and also any other websites where this password has been used.
When you next visit our website, using the following link (https://www.gkgiftcards.co.uk/user) you will be prompted to change your password. As a consequence of this incident, you may receive emails or telephone calls from people who have obtained your personal information illegally and who are attempting to obtain more personal information from you, especially financial information.
This type of fraud is known as 'phishing'. If you receive any suspicious emails, don't reply. Get in touch with the organisation claiming to have contacted you immediately, to check this claim. Do not reply to or click any links within a suspicious email and do not dial a suspicious telephone number given to you by someone who called you. Only use publicly listed contact details, such as those published on an organisation's website or in a public telephone directory, to contact the organisation to check this claim. At this stage of our investigation, we have no evidence to suggest anyone affected by this incident has been a victim of fraud but we are continuing to monitor the situation. We have reported the matter to the Information Commissioner's Office (ICO).
As soon as we were made aware of the incident, our immediate priority was to close down any exposure, which has been done, and then confirm which customer accounts have been affected. I recognise that this is not the sort of message you want to receive from an organisation which you have provided your personal information to. I want to apologise for what has happened, and reassure you that we have taken action to prevent any further loss of personal information, and to limit any harm which might otherwise occur as a result of this incident.
Chief Commercial Officer of Greene King Plc.
- Change your Greene King account password immediately, use a unique and strong password.
- Ensure you have not used the same Greene King credentials (i.e. your email address with the same password) on any other website or app, especially with your email account, and with banking websites and apps. Consider using a password manager to assist you in creating and using unique strong passwords with every website and application you use.
- Always use Multi-factor Authentication (MFA) when offered. MFA provides an additional level of account protection, which protects your account from unauthorised access should your password become compromised.
- Check https://haveibeenpwned.com/ to see if your email and password combination is known to have been compromised in a past data breach.
- Stay alert for customised messages from scammers, who may use your stolen personal information to attempt to con you, by email (phishing), letter and phone (voice & text). Sometimes criminals will pretend to represent the company breached, or another reputable organisation, using your stolen personal account information to convince you they are legit.
- Never click on links, open attachments or reply to any suspicious emails. Remember criminals can fake (spoof) their 'sender' email address and email content to replicate a legitimate email.
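The haveibeenpwned check mentioned above can also be performed for an individual password without ever transmitting the password itself, via the public Pwned Passwords "range" API: only the first five characters of the password's SHA-1 hash are sent, and matching hash suffixes are compared locally. A minimal Python sketch of that k-anonymity scheme (the response-parsing helper assumes the API's documented `SUFFIX:COUNT` line format; the network call itself is left to the reader):

```python
import hashlib


def hibp_range_query_parts(password: str):
    """Split a password's SHA-1 hash for the Pwned Passwords k-anonymity API.

    Only the 5-character prefix is ever sent to the service; the full
    hash (and the password) never leave your machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def count_in_range_response(suffix: str, response_text: str) -> int:
    """Parse the API's 'SUFFIX:COUNT' lines and return the breach count
    for our suffix (0 if the suffix does not appear)."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


prefix, suffix = hibp_range_query_parts("password123")
# In real use, issue an HTTPS GET to
#   https://api.pwnedpasswords.com/range/<prefix>
# and pass the response body to count_in_range_response(suffix, body);
# a non-zero count means the password has appeared in known breaches.
```

Any password that returns a non-zero count should be treated as compromised and never reused, even if you have not personally been breached.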
You have superstar employees who run your business like it’s their own. They use new apps to collaborate with coworkers, vendors, and customers to get work done when it needs to get done. They’re moving your business closer and closer to the cloud. Sounds fantastic! Let them do their thing! But what information is being shared? What apps are they using? Are they secure? Are partners or customers receiving sensitive data that’s not encrypted? Here are a few things to keep in mind as your business accelerates to the cloud.
Businesses Are Adopting Cloud Services Faster Than They Are Being Secured
Employees seeking new cloud services can help you transform the way business is done and improve engagement with customers, partners, and other employees. Most employees are first adopters who are trying new apps to do their jobs in the most efficient way possible. But before you know it, your IT department could become overwhelmed with cloud adoption. This means your organization will inevitably deal with shadow IT as your employees begin using unsanctioned cloud services.
Data Could Be Leaked, Leading to Financial, Reputational, IP, and Compliance Exposure
Do you know what your employees are doing with your business’s data? This is where shadow IT becomes a factor. Not all security controls used today were built with the cloud in mind, especially when it comes to BYOD and IoT. On-premises security products alone can’t provide effective visibility and protection in a hybrid IT world. In a recent McAfee survey, we found that the average organization thinks they use 30 cloud services, but in reality they use 1,935. This disparity is shadow IT—and it’s expanding your attack surface. This leaves your company more exposed to cyberthreats through the use of potentially high-risk cloud services without complete IT visibility or control. Don’t let the risk of shadow IT disrupt your business. Visibility into your organization’s cloud adoption and the devices that connect to these services is a critical step for mitigating the risk of data breaches, non-compliance, and loss of reputation due to shadow IT.
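To make the visibility point concrete, here is a minimal, hypothetical sketch of one way shadow-IT discovery works in practice: diff the cloud domains observed in web-proxy logs against a list of sanctioned services. The log format, domain names, and sanctioned list are illustrative assumptions, not taken from any specific product:

```python
# Hypothetical shadow-IT discovery sketch: compare domains seen in
# proxy logs against a sanctioned-services allowlist. All names and
# the "user,domain" log format are illustrative.
SANCTIONED = {"office365.com", "salesforce.com", "box.com"}

proxy_log = [
    "alice,box.com",
    "bob,randomfileshare.example",
    "carol,salesforce.com",
    "dave,pastebin.com",
]


def shadow_services(log_lines, sanctioned):
    """Map each unsanctioned domain to the set of users who accessed it."""
    seen = {}
    for line in log_lines:
        user, domain = line.split(",")
        if domain not in sanctioned:
            seen.setdefault(domain, set()).add(user)
    return seen


unsanctioned = shadow_services(proxy_log, SANCTIONED)
# 'unsanctioned' now maps each shadow-IT domain to the users involved,
# giving IT a starting point for risk assessment rather than blame.
```

Real CASB tooling layers risk scoring, category data, and volume analysis on top of this basic diff, but the visibility principle is the same.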
Move at the Speed of Business Without Compromising Security
The future of your company depends on growth and flexibility. Don’t pause on innovation and progress. Let your employees use the devices and apps they have and gain peace of mind knowing that your valuable information is secure. You can place security’s architectural control points on the places where employees work—from device to cloud and in between. You can allow restricted usage of services through application control and still prevent data exfiltration. A cloud access security broker (CASB) can help detect and block instances of sensitive data being uploaded to these shadow IT services.
You can accelerate your transformation to the cloud with IT security as a business enabler. Use security operations — with threat intelligence, management, analytics, automation, and orchestration — as the glue to identify the most advanced threats and crossover attacks. A CASB can be integrated seamlessly into IaaS, PaaS, and SaaS environments to secure cloud services as they are being adopted. Let your employees shine and take your business to the next level backed by an IT department tooled with industry-leading visibility and control provided by our CASB solution, McAfee MVISION Cloud.
Watch our video to understand how using McAfee can enable you to accelerate your business, reducing the risk of transformative technologies like the cloud and all the devices employees use to access data.
The post Are Your Employees Using Your Data in the Shadows? appeared first on McAfee Blogs.
What phone app has over 150 million active users and more than 14 million uploads every day? You might guess Facebook, Instagram, or Snapchat, but you’d be wrong. Meet TikTok — a video app kids are flocking to that is tons of fun but also carries risk.
What Is It?
TikTok is a free social media app that allows users to create and share short 15-second videos set to favorite music. If your child was a fan of Musical.ly, then he or she is probably active on TikTok since Musical.ly shut down last year and moved all of its users to TikTok. Kids love the app because it’s got all the social perks — music, filters, stickers — and the ability to amass likes and shares (yes, becoming TikTok-famous is an aspiration for some).
There are a lot of positive things about this app. It’s filling the void of the sorely missed Vine app in that it’s a fun hub for video creation and peer connection. Spending time on TikTok will make you laugh out loud, sing, and admire the degree of creativity so many young users put into their videos. You will see everything from heartfelt, brave monologues, to incredible athletic stunts, to hilarious, random moments in the lives of teens. It’s serious fun.
Another big positive is that the app appears to take Digital Wellbeing (in-app tools that encourage limits on screen time), privacy, and online safety seriously. Its resources tab is rich with tips for both parents and kids.
The (Potential) Downside
As with any other social app, TikTok carries inherent risks, as reported by several news sources, including ABC.
For instance, anyone can view your child’s videos, send a direct message, and access their location information. And, while TikTok requires that users be at least 13 years old and that anyone under 18 have a parent’s approval, if you browse the app, you’ll quickly find that plenty of preteens are using it. A predator could easily create a fake account, or many accounts, to strike up conversations with minors.
Another danger zone is inappropriate content. While a lot of TikTok content is fun and harmless, there’s a fair share of the music that includes explicit language and users posting content that should not be viewed by a young audience.
And, wherever there’s a public forum, there’s a risk of cyberbullying. When a TikTok user posts a video, that content instantly becomes open for public comment or criticism and dialogue can get mean.
Talking Points for Families
Most social media apps carry an inherent risk factor because the world wide web is just that — much of the planet’s population in the palm of your child’s hand. Kids of different ages will use apps differently. So, when it comes to apps, it’s a good idea to monitor how your child uses each one and tailor conversations from there.
- Download the app. If your child uses TikTok, it’s a good idea to download the app too. Look around inside the community. Analyze the content and the culture. Are the accounts your child follows age appropriate? Are the comments and conversations positive? Does your child know his or her followers? Is your child posting appropriately?
- Talk about the risks. Spend time with your child and watch how he or she uses TikTok. Let them teach you why they love it. Encourage creativity and fun, but don’t hesitate to point out danger zones and how your child can avoid them.
- Monitor direct messages. This may seem invasive, but a lot of the safety threats to your child take place behind the curtain of the public feed in direct messages. Depending on the age of your child (and the established digital ground rules of your family) consider requiring access to his or her account.
- Adjust settings. Make sure to set the account to ‘private’ so only people your child knows can access his or her content and send direct messages. Also, turn off location services and consider getting comprehensive security software for all family devices.
Apps are where the fun is for kids so you can bet your child will at least check out buzz-worthy platforms like TikTok. They may browse, or they may become content creators. Your best social monitoring tool is to keep an open dialogue with your child. Keep talking with your kids about what’s going on in their digital life — where they hang out, who their friends are, and what’s new. You may get some resistance but don’t let that stop you from doing all you can to keep your family safe online.
The post Are Your Kids Part of the TikTok App Craze? Here’s What Parents Need to Know appeared first on McAfee Blogs.
A couple of weeks ago, one famous lawyer blogged about an issue frequently discussed these days: the GDPR, one year later.
“The sky has not fallen. The Internet has not stopped working. The multi-million-euro fines have not happened (yet). It was always going to be this way. A year has gone by since the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’) became effective and the digital economy is still going and growing. The effect of the GDPR has been noticeable, but in a subtle sort of way. However, it would be hugely mistaken to think that the GDPR was just a fad or a failed attempt at helping privacy and data protection survive the 21st century. The true effect of the GDPR has yet to be felt as the work to overcome its regulatory challenges has barely begun.”
It’s true that since that publication, the CNIL issued a €50 million fine against Google, mainly for lacking a clear and transparent privacy notice. But even that amount looks negligible next to the €1.5 billion antitrust fine the European Union had imposed on Google just three months earlier.
So, would we say that despite the sleepless nights making sure our companies were ready to comply with privacy, privacy pros are a bit disappointed by the journey? Or what should be our reaction, as privacy pros, when people around us ask, “Is your GDPR project over now?”
Well, guess what? Just like we said last year, it’s a journey and we are just at the start of this voyage. But in a world where cloud has become the dominant way to access IT services and products, it might be useful to highlight a project to which the GDPR gave birth, the EU Cloud Code of Conduct.
Of course, cloud existed prior to the GDPR and many regulators around the world had given guidance well before the GDPR on how to tackle the sensitivity and the risks arising from outsourcing IT services in the cloud. But before the GDPR, most cloud services providers (CSPs) were inclined to attempt to force their customers (the data controllers) to “represent and warrant” that they would act in compliance with all local data laws, and that they had all necessary consents from data subjects to pass data to the CSP processors pursuant to the services. This scenario, although not sensible under EU data protection law, was often successful, as the burden of non-compliance used to lie solely with the customer as controller.
The GDPR changed that in Recital 81, making processors responsible for the role they also play in protecting personal data. Processors are no longer outside the ambit of the law since “the controller should use only processors providing sufficient guarantees, in particular in terms of expert knowledge, reliability and resources, to implement technical and organizational measures which will meet the requirements of this Regulation, including for the security of processing.
The adherence of the processor to an approved code of conduct or an approved certification mechanism may be used as an element to demonstrate compliance with the obligations of the controller.”
With the GDPR, processors must implement appropriate technical and organizational security measures to protect personal data against accidental or unlawful destruction or loss, alteration, unauthorized disclosure, or access.
And adherence to an approved code of conduct may provide evidence that the processor has met these obligations, which brings us back to the Cloud Code of Conduct. One year after the GDPR, the EU Cloud Code of Conduct General Assembly reached a major milestone in releasing the latest Code version that has been submitted to the supervisory authorities.
The Code describes a set of requirements that enable CSPs to demonstrate their capability to comply with GDPR and international standards such as ISO 27001 and 27018. It also proves that the GDPR has marked a strong shift in the contractual environment.
In this new contractual arena, a couple of things are worth emphasizing:
- The intention of the EU Cloud Code of Conduct is to make it easier for cloud customers (particularly small and medium enterprises and public entities) to determine whether certain cloud services are appropriate for their designated purpose. It covers the full spectrum of cloud services (SaaS, PaaS, and IaaS), and has an independent governance structure to deal with compliance as well as an independent monitoring body, which is a requirement of GDPR.
- Compliance to the code does not in any way replace the binding agreement to be executed between CSPs and customers, nor does it replace the right for customer to request audits. It introduces customer-facing versions of policies and procedures that allow customers to know how the CSP works to comply with GDPR duties and obligations, including policies and processes around data retention, audit, sub-processing, and security.
The Code proposes interesting tools to enable CSPs to comply with the requirements of the GDPR. For instance, on audit rights, it states that:
“…the CSP may e.g. choose to implement a staggered approach or self-service mechanism or a combination thereof to provide evidence of compliance, in order to ensure that the Customer Audits are scalable towards all of its Customers whilst not jeopardizing Customer Personal Data processing with regards to security, reliability, trustworthiness, and availability.”
Another issue that often arises when negotiating cloud agreements: engaging a sub-processor is permissible under the requirements of the Code, but it requires—similar to the GDPR—a prior specific or general written authorization of the customer. A general authorization in the cloud services agreement is possible subject to a prior notice to the customer. More specifically, the CSP needs to put in place a mechanism whereby the customer is notified of any changes concerning an addition or a replacement of a sub-processor before that sub-processor starts to process personal customer data.
The issues highlighted above demonstrate the shift in the contractual environment of cloud services.
Where major multinational CSPs used to offer only a minimum set of contractual obligations coupled with minimum legal warranties, it is interesting to note how drastically the GDPR has changed the situation. Nowadays, the most important cloud players are happy to demonstrate their willingness to make contractual commitments. The more influential you are as a cloud player, the more able you are to comply with the stringent requirements of the GDPR.
 Eduardo Ustaran – The Work Ahead. https://www.linkedin.com/pulse/gdpr-work-ahead-eduardo-ustaran/
 Article 40 of the GDPR
 Article 5.6 of the Code
As Europe heads to the polls this weekend (May 23-26) to elect Members of the European Parliament (“MEPs”) representing the 28 EU Member States, the threat of disinformation campaigns aimed at voters looms large in the minds of politicians. Malicious players have every reason to try to undermine trust in established politicians, and push voters towards the political fringes, in an effort to destabilise European politics and weaken the EU’s clout in a tense geopolitical environment.
Disinformation campaigns are of course not a new phenomenon, and have been a feature of public life since the invention of the printing press. But the Internet and social media have given peddlers of fake news a whole new toolbox, offering bad actors unprecedented abilities to reach straight into the pockets of citizens via their mobile phones, while increasing their ability to hide their true identity.
This means that the tools to fight disinformation need to be upgraded in parallel. There is no doubt that more work is needed to tackle disinformation, but credit should also go to the efforts being made to protect citizens from misinformation during elections. The European Commission has engaged the main social media players in better reporting around political advertising and in preventing the spread of misinformation, as a complement to the broader effort to tackle illegal content online. The EU’s foreign policy agency, the External Action Service, has also deployed a Rapid Alert System involving academics, fact-checkers, online platforms and partners around the world to help detect disinformation activities and to share information about disinformation campaigns and methods among member states, helping them stay on top of the game. The EU has also launched campaigns to make citizens more aware of disinformation and to improve their cyber hygiene, inoculating them against such threats.
But adding cybersecurity research, analysis and intelligence tradecraft to the mix is a vital element of an effective public policy strategy. And recently published research by Safeguard Cyber is a good example of how cybersecurity companies can help policymakers get to grips with disinformation.
The recent engagement between the European Commission’s think-tank, the EPSC, and Safeguard Cyber is a good example of how policymakers and cyber experts can work together, and we encourage more such collaboration and exchange of expertise in the months and years ahead. McAfee Fellow and Chief Scientist Raj Samani told more than 50 senior-ranking EU officials in early May that recent disinformation campaigns are “direct, deliberate attacks on our way of life” that seek to disrupt and undermine the integrity of the election process. He urged policymakers to use cyber intelligence and tradecraft to understand the adversary, so that our politicians can make informed decisions on how best to combat the very real threat this represents to our democracies. In practice, this means close collaboration between best-in-class cybersecurity researchers, policymakers and social media players to gain a deeper understanding of the modus operandi of misinformation actors and respond more quickly.
As the spectre of disinformation is not going to go away, we need a better understanding of the actors involved, their motivations and, most importantly, the rapidly changing technical tools they use to undermine democracy. And each new insight into tackling disinformation will be put to good use in elections later this year in Denmark, Norway, Portugal, Bulgaria, Poland, Croatia and Austria.
The post McAfee Playing an Ever Growing Role in Tackling Disinformation and Ensuring Election Security appeared first on McAfee Blogs.
Apps not only provide us with entertainment, but also help us become more efficient or learn new things. One such app is Game Golf, which comes as a free app, a paid pro version with coaching tools, or with a wearable analyzer. With over 50,000 downloads on Google Play, the app helps golfers track their on-course performance and use the data to improve their game. Unfortunately, millions of golfer records from the Game Golf app were recently exposed to anyone with an internet connection, thanks to a cloud database lacking password protection.
According to researchers, this exposure consisted of millions of records, including details on 134 million rounds of golf, 4.9 million user notifications, and 19.2 million records in an activity feed folder. Additionally, the database contained profile data like usernames, hashed passwords, emails, gender, Facebook IDs, and authorization tokens. The database also contained network information for the company behind the Game Golf app, Game Your Game Inc., including IP addresses, ports, pathways, and storage information that cybercrooks could potentially exploit to further access the network. A combination of all of this data could theoretically provide cybercriminals with more information on the user, creating greater privacy concerns. Thankfully, the database was secured about two weeks after the company was initially notified of the exposure.
Although it is still unclear as to whether cybercriminals took a swing at this data, the magnitude of the information exposed by the app is cause for concern. Luckily, users can follow these tips to help safeguard their data:
- Change your passwords. If a cybercriminal got a hold of the exposed data, they could easily gain access into other online accounts if your login credentials were the same across different platforms. Err on the side of caution and change your passwords to something strong and unique for each account.
- Check to see if you’ve been affected. If you’ve used the Game Golf app and believe your data might have been exposed, use this tool to check or set an alert to be notified of other potential exposures.
- Secure your online profiles. Use a security solution like McAfee Safe Connect to encrypt your online activity, help protect your privacy by hiding your IP address, and better defend against cybercriminals.
The post Game Golf Exposure Leaves Users in a Sand Trap of Data Concerns appeared first on McAfee Blogs.
“Am I fat?”
“I am so depressed. Please help! I have been scoring less, my parents don’t understand me… my brilliant siblings treat me with disdain… my girlfriend has broken up with me….”
“Thanks! That’s why I feel a connect with you- you really get me (no one else does!) ….”
“I am closing my Facebook account for a while. I have fallen but I promise you I will rise again, like the Phoenix and will proudly stand before you once again. For now, I am going away. Please don’t try to contact me.”
“I hate you ********!”
All the above statements are variations of real ones posted on different social media platforms by adolescents. Do spare a few moments thinking about the posts- I spent days. What are your thoughts on these? How do you feel about getting a direct look into the hearts of these innocent and confused children?
It is both saddening and worrying that kids are turning to the Internet to find solutions to their problems. But what propels them to trust strangers?
Why do adolescents overshare online?
- Embarrassing topics: The would-be adults have many doubts about adult life that they feel shy or scared to discuss with their parents
- Emotional outbursts: Adolescence is a time for emotional upheavals and the kids find social media the best place to voice their thoughts
- False sense of privacy: As they are not connecting one-to-one in real life, children feel more comfortable discussing and sharing personal matters with online friends
- No fear of recrimination: This is one reason why they may not open up to adults at home
- Peer pressure: If most of their friends are venting on social media, your kids are likely to follow suit
Help! I am losing it!
Rule No. 1 for parents- don’t get worked up. You are not alone. Most parents go through this phase. Here are some tips to help you bond better with your tweens and teens.
- Be patient. You are the parent- always keep that in mind and don’t lose your cool. It will help you to mark your own space and earn you your child’s trust and respect
- Be in touch with their online lives. Be proactive and stay updated on the latest in the social media world so that you can interact with them on the same wavelength
- Monitor screentime and keep them engaged: If your child is withdrawn in real life but spends a lot of time online, you need to know why. Set internet usage limits. Remember, boredom and low self-confidence can lead a child to look for friends online, so ensure they are productively engaged offline.
- Help them to know their personal boundaries. They need to know and respect the limits you set on sharing
- LISTEN and listen well and only then offer your suggestions
- Keep communication channels open. Do not let a wall build up between you
- Be in touch with your child’s friends and ensure your child has plenty of quality time with them.
Tips to share with kids:
- Think before you lay bare your personal life online: Your blog or page isn’t your diary, for it’s not private. How would you feel if in a few years your seniors, professors or employers read this?
- Your online friends are strangers: Think. Do you want to share your deepest concerns or most private details with them? What if they out them? Can you handle the consequences?
- Share with real friends instead: Your online friends may not have any sense of loyalty towards you. Better to have one or two dependable real life ones, who you know well.
- Keep real identity private and maximize account security for all accounts: This is very important for your online safety. Secure your device with licensed security software and use two-factor authentication to secure accounts.
- Do not share passwords with anyone: Some things in life are best kept to yourself, including your passwords. Do not give online friends remote access to your screen either.
Your parents are always there for you
This is what you need to impress upon your tweens and teens: Even though you may feel we do not understand, we do, for we were of your age once. We understand what you are going through. We may set rules that seem tough or discipline you when needed but that doesn’t mean we do not love you. We do what we think is best for you. And we are always there for you.
Before signing off, let me remind you of our cybersafety mantra that you need to repeat often at home: STOP. THINK. SHARE.
The post What Will You Do If You Find That Your Kids Are Sharing Their Troubles and Pains Online? appeared first on McAfee Blogs.
Encryption is fast becoming developers’ go-to solution for whatever data privacy or security worry ails you. Afraid of putting your credit card into a web form? Don’t worry. We’re using encryption. Unsure whether a website is authentic? Encryption has your back. The good news is, encryption algorithms do work some of the time. The math offers some protection against eavesdropping and a solid foundation for trust.
But all magic has its limits and the tricky thing with encryption is that it can be impossible to be certain where those limits are. The math can be so awe inspiring that we can only sit and stare with our mouths agape. It’s easy to get lost and just resign yourself to trusting it will all work out when the calculations are so complex.
- 22-May-2019 New Zero-Day Exploit for Bug in Windows 10 Task Scheduler
- 14-May-2019 ZombieLoad: Researchers discover New Hardware Vulnerability in Modern Intel Processors
- 14-May-2019 Prevent a worm by updating Remote Desktop Services
- 13-May-2019 WhatsApp voice calls used to inject Israeli spyware on phones
- 13-May-2019 Cisco Secure Boot Hardware Tampering Vulnerability
- Department of Commerce Announces the Addition of Huawei Technologies Co. Ltd to the Entity List
- Huawei's use of Android restricted by Google
- Google will work with Huawei for the next 90 days after US eases restrictions
- What happens to my Huawei smartphones and tablets now
- ARM memo tells staff to stop working with China’s tech giant
- China warns of investment blow to the UK over 5G ban
- Trump declares a national emergency over IT threats
- Trump says Huawei could be part of trade deal
- Huawei: Which countries are blocking its 5G technology?
- Huawei 'to go the extra mile' to reassure world on 5G spying
- Is Huawei in retreat?
- Huawei says billions of customers could be harmed by US sanctions
- Mike Pompeo warns the UK over Huawei 'security risks'
- Huawei's microchip vulnerability explained
- Vodafone Found Hidden Backdoors in Huawei Equipment
- Microsoft researchers find NSA-style backdoor in Huawei laptop
- Huawei the Company and the Security Risks Explained
- Theresa May has questions to answer over the Huawei scandal
- Sacked defence secretary denies security council leak on Huawei decision
- Vodafone denies Huawei Italy security risk
CRN®, a brand of The Channel Company, has announced it has named three Veracode leaders to its prestigious 2019 Women of the Channel list. The leaders on this annual list are from all areas of the IT channel ecosystem, representing technology suppliers, distributors, solution providers, and other IT organizations. Each honoree is recognized for her contributions to channel advocacy, channel growth and visionary leadership.
CRN editors choose the list from a multitude of channel leadership applicants and select the final honorees based on their professional accomplishments, demonstrated expertise, and ongoing dedication to the IT channel.
“CRN’s 2019 Women of the Channel list honors influential leaders who are accelerating channel growth through mutually-beneficial partnerships, incredible leadership, strategic vision, and unique contributions in their field,” said Bob Skelley, CEO of The Channel Company. “This accomplished group of leaders is driving channel success and we are proud to honor their achievements.”
The 2019 Women of the Channel list will be featured in the June issue of CRN Magazine and is featured online at www.CRN.com/WOTC.
Leslie Bois is responsible for all global indirect channel sales growth, and develops and executes Veracode’s global strategy to build a strong partner network that plays a significant role in the company’s go-to-market efforts. Under her leadership, Veracode’s channel pipeline has grown by three times over the past 12 months, and the company’s international business is growing rapidly in partnership with managed security service providers and partners in emerging markets in Asia, Latin America, Europe and the Middle East. Earlier this year, Bois was also recognized by CRN in its list of 2019 Channel Chiefs.
Lisa Quinby has more than 25 years of technology experience, and oversees a team of regional marketing professionals to drive marketing programs through high-touch integrated marketing to help meet sales objectives, including programs to, through and with partners. She is also responsible for the Veracode Partner Program and all related initiatives.
Robin Montague is responsible for collaborating with Veracode’s largest national partner to set and execute a joint strategy to drive incremental revenue. She enjoys mentoring people new to the channel, and is focused on training and enablement to differentiate Veracode’s partners and meet demand.
For more information on partnering with Veracode, please visit here.
The summer season is quickly approaching. Users will take to the skies, roads, and oceans to travel throughout the world for a fun family adventure. But just because users take time off doesn’t mean that their security should. So, with the season’s arrival, we decided to conduct a survey to better understand users’ cybersecurity needs, as well as help them leave their cybersecurity woes behind while having some fun in the sun. That’s why we asked our users what they are most concerned about during the summer, so we can help them protect what really matters. Let’s see what they had to say.
Sharing the Fun
When it comes to vacations, we’re constantly taking and sharing snaps of amazing memories. What we don’t plan on sharing is the metadata embedded in each photo that can give away more than we intended. In fact, from our research we found that people are 3x more likely to be concerned about their Social Security number being hacked than their photos. Given the risk a compromised SSN poses for the potential of identity theft, it’s no surprise that respondents were more concerned about it. However, to keep the summer fun secure, it’s also important to keep travel photos private and only share securely.
Flying Safely and Securely
From a young age, we have been taught to keep our Social Security number close to the chest, and this is evident in how we protect SSNs. As a matter of fact, 88% of people would be seriously worried if their Social Security number was hacked. The best way to keep a Social Security number secure this summer – don’t share it when purchasing plane tickets or managing travel reservations. All you need to provide is a credit card and passport.
Making Smartphone Security #1
While on the go, travelers are often keenly aware of how exposed they are physically when carrying around credit cards, passports, suitcases, gadgets and more. However, they also need to think about securing their digital life, particularly their handheld devices. To keep personal photos protected while traveling this summer season, smartphone security must be a top priority. With nearly 40% of respondents concerned about sensitive personal photos being hacked, jet setters need to be proactive about security, not reactive. In fact, we’re reminded of just how important this fact is as we enter the month of June, Internet Safety Month. Just like your laptop or router, it’s vital to protect the personal data stored within a smartphone.
In order to help you stay secure this season, let’s put your travel security knowledge to the test.
The post 3 Things You Need to Know About Summer Cybersecurity appeared first on McAfee Blogs.
IT teams handle a great number of tasks that enable an organization to run smoothly. These include handling questions related to technical support for the company’s computer systems, software, and hardware, in addition to performing regular system updates and meeting periodic training needs. Yet research shows that helpdesks are also spending anywhere from 20-50 percent of their time dealing with password requests. Why are helpdesks so bogged down with password management tasks, and how can you free up their time while also prioritizing security?
A Never-Ending To-Do List
Password resets are costly, primarily because they are time-consuming when done manually. Every issue results in a support ticket that must be opened, filled out, and eventually closed. Then there is the act of resetting the password and confirming with the user that everything has been resolved, or if further troubleshooting is necessary. This process can take ten minutes or longer, which, at first, doesn’t seem like much. However, if you multiply that by the number of employees in a large organization, the labor time quickly begins to add up.
Additionally, since helpdesk staff know that lockouts prevent productivity, they tend to drop what they are doing and address the issue. Constant disruptions can prevent other tasks from getting done, or done well, simply because of the time it takes to settle back into work and remember where you were in the process.
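The cost argument above can be put into a quick back-of-the-envelope estimate. The figures below are illustrative assumptions, not vendor data:

```python
# Back-of-the-envelope estimate of annual manual password-reset cost.
# All figures in the example call are illustrative assumptions.

def annual_reset_cost(employees, resets_per_user_per_year,
                      minutes_per_reset, helpdesk_hourly_rate):
    """Return the yearly helpdesk labor cost of manual password resets."""
    total_resets = employees * resets_per_user_per_year
    hours = total_resets * minutes_per_reset / 60
    return hours * helpdesk_hourly_rate

# Example: 5,000 employees, 4 resets each per year,
# 10 minutes per ticket, $30/hour helpdesk labor.
cost = annual_reset_cost(5000, 4, 10, 30)
print(f"Estimated annual cost: ${cost:,.0f}")  # Estimated annual cost: $100,000
```

Even with conservative inputs, the labor spend is substantial, which is why self-service solutions pay for themselves quickly.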
Self-service password reset solutions like Core Password enable users to securely reset passwords themselves, freeing up helpdesk employee time and allowing them to work on other important IT needs. Additionally, these solutions not only maintain security, but also can improve it by enforcing reliable authentication and consistent, stronger password policies. Detailed audit trails also help monitor for any abnormal activity, like multiple resets.
Enabling Immediacy Through Self Service
While the helpdesk team may be shouldering the burden of reset tasks, they aren’t the only ones dealing with the problems that password issues cause. Users who are locked out of critical business applications and resources are severely limited in the work they can accomplish. Having to call the helpdesk or file a ticket puts that work on hold, disrupting the day of the user and using valuable labor time of a helpdesk employee.
Additionally, these lockouts and reset needs do not always occur during regular business hours. Depending on the industry or organization, helpdesk employees may not be on-call in off hours, meaning the user remains locked out until regular business hours resume. A self-service password reset solution eliminates these problems by allowing the user to reset their own password securely, and then get back to work.
Being locked out of critical applications like email is one thing, but getting locked out of your workstation doesn’t merely reduce productivity, it grinds it to a complete halt. An effective self-service solution needs to provide a way to reset a password even when the user is locked out of the workstation and stuck at the login screen. Core Password provides several options for solving this problem. These include a Windows Credential Provider, telephone-based keypad authentication, voice biometric authentication, and mobile phone apps. These solutions also enable users of non-Windows-based applications, like a shop floor terminal or other devices, to take advantage of password self-service.
Calculating Savings for Helpdesk Bandwidth and Budget
Using our budget calculator will provide a high-level overview of potential savings your organization can gain from implementing a self-service password reset solution. Using your own organizational values makes the output more meaningful, allowing you to get an understanding of how your business can benefit from a solution like Core Password.
Integrating Solutions for Holistic Identity Governance and Administration
IT teams greatly benefit from dedicating less time to constant password management, and employees no longer have to waste time waiting to get back to work. However, even more access management tasks can be streamlined, giving IT teams more time to tend to critical security issues while also ensuring employees have all the access they need to do their jobs.
Core Password is part of Access Assurance Suite, a bundle of robust Identity Governance and Administration (IGA) solutions that improves efficiency while also strengthening security. See how your organization can benefit with a personalized demo.
Get an overview of potential savings your organization can gain from implementing a self-service password reset solution with our budget calculator.
With summertime just around the corner, many people are planning vacations to enjoy some much-needed R&R or quality time with family and friends. Airbnb offers users a great alternative to a traditional hotel experience when they are looking to book their summer getaways. However, it appears that cybercriminals have used the popularity of the platform as a means to carry out their malicious schemes. Unfortunately, some Airbnb users are being scammed with fake rentals and account closures, whether they’re planning a trip or not.
While Airbnb stated that its platform was at no point compromised, a number of users have been charged for non-refundable reservations at fake destination homes and have had money taken out of their bank and PayPal accounts. Additionally, some users have had their account credentials changed without their permission, making it difficult to contact customer support about the fraudulent charges. For example, one user had three non-refundable reservations made in Ukraine on her account. Then, the reservations were canceled and her account was deleted all within a few minutes, making it impossible to reach Airbnb’s customer support. Luckily, the user was able to contact the vacation rental platform through the company’s Twitter account and receive a refund for the fraudulent charges.
Airbnb claimed that users’ accounts were accessed with correct login credentials that must have been “compromised elsewhere.” Regardless of how this scam originated, it’s important to take precautions when it comes to your online safety, so you can continue to use platforms like Airbnb to plan fun family vacations without any worries. Use these tips to help you stay secure:
- Avoid unauthorized sites. Cybercriminals often use fake websites to trick users into giving up their login credentials or financial information. Make sure that the web address doesn’t contain any odd-looking characters or words. For example, “Airbnb-bookings.com” is an invalid web address.
- Be wary of suspicious emails. If you receive an email asking you to click a link and enter personal data or one that contains a message that has a sense of urgency, proceed with caution. If the email isn’t from a legitimate, recognized Airbnb email address, it’s best to avoid interacting with the message altogether.
- Be careful where you click. When proceeding with an Airbnb transaction, make sure that you stay on their secure platform throughout the entire process, including the payment. Know that the company will never ask you to wire money or pay a host directly.
- Report issues. If you experience any suspicious listings, emails, or websites while trying to complete a booking, report this by emailing Airbnb at firstname.lastname@example.org.
- Use a security solution to surf the web safely. Using a tool like McAfee WebAdvisor can help you avoid dangerous websites and links and will warn you in the event that you do accidentally click on something malicious.
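The domain check in the first tip above can be sketched in a few lines. This is a hypothetical helper, not an Airbnb or McAfee tool; the only assumption is that legitimate pages live on airbnb.com or one of its subdomains:

```python
from urllib.parse import urlparse

# Hypothetical helper illustrating the "avoid unauthorized sites" tip:
# accept only airbnb.com itself or a subdomain of it. Lookalike domains
# such as "airbnb-bookings.com" fail both checks.
def looks_like_airbnb(url):
    host = urlparse(url).hostname or ""
    host = host.lower().rstrip(".")
    return host == "airbnb.com" or host.endswith(".airbnb.com")

print(looks_like_airbnb("https://www.airbnb.com/rooms/123"))   # True
print(looks_like_airbnb("https://airbnb-bookings.com/login"))  # False
```

Note that a real phishing check also needs to handle tricks like Unicode lookalike characters; this sketch only covers the plain-ASCII case described in the tip.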
The post Don’t Let Airbnb Scams Stop Your Summer Travel Plans appeared first on McAfee Blogs.
A much overlooked but essential part in financially motivated (cyber)crime is making sure that the origins of criminal funds are obfuscated or made to appear legitimate, a process known as money laundering. ’Cleaning’ money in this way allows the criminal to spend their loot with less chance of being caught. In the physical world, for instance, criminals move large sums of cash into offshore accounts and create shell companies to obfuscate the origins of their funds. In the cyber underground where Bitcoin is the equivalent of cash money, it works a bit differently. As Bitcoin has an open ledger on which every transaction is recorded, it makes it a bit more challenging to obfuscate funds.
When a victim pays a criminal after being extorted with ransomware, the ransom transaction in Bitcoin and all additional transactions can then be tracked through the open ledger. This makes following the money a powerful investigative technique, but criminals have come up with an inventive method to make tracking more difficult; a mixing service.
A mixing service cuts a sum of Bitcoins into hundreds of smaller transactions, mixes in transactions from other sources for obfuscation, and pays out the input amount, minus a fee, to a given output address. Mixing Bitcoins that are obtained legally is not a crime but, other than the mathematical exercise, there is no real benefit to it.
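The splitting step described above is simple arithmetic. The toy sketch below (not a real mixer, and with made-up fee and chunk-count parameters) shows how one payment becomes many irregular outputs whose total equals the input minus the fee:

```python
import random

# Toy illustration of the splitting step the article describes:
# one input amount becomes several randomly sized outputs, minus a fee.
# fee_rate, parts and seed are illustrative assumptions.
def mix(amount_btc, fee_rate=0.03, parts=10, seed=42):
    rng = random.Random(seed)
    payout = amount_btc * (1 - fee_rate)
    # Draw random weights and normalize so the chunks sum to the payout.
    weights = [rng.random() for _ in range(parts)]
    total = sum(weights)
    return [payout * w / total for w in weights]

outputs = mix(5.0)
print(len(outputs))            # 10 irregular chunks
print(round(sum(outputs), 8))  # 4.85 (input minus the 3% fee)
```

Because each chunk is an arbitrary fraction blended with other users’ funds on the real service, an investigator can no longer match one input transaction to one output on the ledger, which is exactly what makes following the money harder.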
The legality changes when a mixing service advertises itself as a way to avoid various anti-money laundering policies through anonymity. This is actively offering a money laundering service.
Last year, advertisements for a new mixing service called Bestmixer.io appeared on several cryptocurrency-related websites. Judging by the advertisements, it sounded like the service could be considered money laundering or could aid tax evasion.
Nature of the service
Bestmixer offered a very clear page on why someone should mix their cryptocurrency. On this page Bestmixer described the current anti-money laundering policies and how its service could help evade these policies by making funds anonymous and untraceable. Offering such a service is considered illegal in many countries.
Bestmixer’s explanation page, “why someone should mix bitcoins”.
A closer inspection of the Bestmixer site revealed that its website was hosted in the Netherlands. McAfee ATR informed the Financial Advanced Cyber Team (FACT) of the Dutch anti-fraud agency (FIOD) of Bestmixer.io’s location. FACT is a team that specializes in investigating the financial component of (cyber)crime. A yearlong international investigation led to the takedown of Bestmixer’s infrastructure today.
The post Cryptocurrency Laundering Service, BestMixer.io, Taken Down by Law Enforcement appeared first on McAfee Blogs.
Kudos to you if you are already implementing some level of application security; however, no matter what stage of AppSec maturity your organization is at, your program may still have room for improvement. Since 2006, we’ve been helping customers build out AppSec programs big and small, and in the process, we’ve learned a lot about what works and what doesn’t. To help you take your program to the next level, we’ve put together this guide of AppSec best practices.
The guide outlines a few areas where you can focus to make impactful improvements, including the following:
Take Advantage of Integrations
We recommend fixing vulnerabilities earlier in the SDLC by integrating with Veracode’s plugins, wrappers, and APIs. By installing available plugins or leveraging standard Veracode APIs and wrappers, you can establish seamless, reciprocal data exchanges between our platform and your development teams’ IDEs, build systems, bug tracking databases, and other systems. This reduces friction and silos among teams, lowers the context-switching cost for developers, and helps developers discover and fix security findings earlier and faster, reducing cost and time.
Shift Left for Security Success
The more you can make code secure during development, the more you can maximize velocity later by reducing the number of security flaws that developers and operations must fix at the end of the process. By shifting security left, your teams can embed security into the software development process as they create code, checking for and removing vulnerabilities before they emerge instead of after the fact. According to NIST, flaws fixed during coding can reduce costs by as much as six times compared to making the exact same fix in production.
Vary Your Application Testing Methods
A strategy that’s overly reliant on just one testing type can leave software vulnerable while providing organizations with a false sense of security. Don’t believe claims that any single type of test is better than another; each has its own strengths and weaknesses. It takes a balanced approach to properly evaluate and mitigate risks. Understand the scope and coverage of each assessment technology to round out your program.
Always Be Scanning
There’s a strong correlation between how often an organization scans and how quickly they address their vulnerabilities. When creating a scan strategy, it’s important to prioritize frequent scans of small builds over one big scan of a large build. This allows your developers to make gradual, continuous improvements to the security of your software when the code is still fresh in their minds and easier to fix. It’s important to keep in mind scanning is just one piece of the puzzle; you must fix what you find in order to have an effective AppSec program.
Never Stop Learning
AppSec is always evolving, with new solutions and new vulnerabilities popping up regularly. And with the increased speed of development, plus security shifting “left,” developers need to catch security-related defects on their own as often as possible. But, most developers have had no opportunities to learn secure coding, in school or on the job. Education and training can provide some of your greatest security ROI: According to our research, eLearning improved developer fix rates by 19 percent while remediation coaching improved fix rates by 88 percent.
We know that AppSec isn’t a one-size-fits-all program; however, from our observations, these are some of the common best practices implemented by successful AppSec programs. For more tips on how to strengthen your AppSec program, read our Application Security Best Practices Handbook.
During Microsoft’s May Patch Tuesday cycle, a security advisory was released for a vulnerability in the Remote Desktop Protocol (RDP). What was unique in this particular patch cycle was that Microsoft produced a fix for Windows XP and several other operating systems, which have not been supported for security updates in years. So why the urgency and what made Microsoft decide that this was a high risk and critical patch?
According to the advisory, the issue discovered was serious enough that it led to Remote Code Execution and was wormable, meaning it could spread automatically on unprotected systems. The bulletin referenced well-known network worm “WannaCry” which was heavily exploited just a couple of months after Microsoft released MS17-010 as a patch for the related vulnerability in March 2017. McAfee Advanced Threat Research has been analyzing this latest bug to help prevent a similar scenario and we are urging those with unpatched and affected systems to apply the patch for CVE-2019-0708 as soon as possible. It is extremely likely malicious actors have weaponized this bug and exploitation attempts will likely be observed in the wild in the very near future.
Vulnerable Operating Systems:
- Windows 2003
- Windows XP
- Windows 7
- Windows Server 2008
- Windows Server 2008 R2
Worms are viruses which primarily replicate on networks. A worm will typically execute itself automatically on a remote machine without any extra help from a user. If a virus’ primary attack vector is via the network, then it should be classified as a worm.
The Remote Desktop Protocol (RDP) enables connection between a client and endpoint, defining the data communicated between them in virtual channels. Virtual channels are bidirectional data pipes which enable the extension of RDP. Windows Server 2000 defined 32 Static Virtual Channels (SVCs) with RDP 5.1, but due to limitations on the number of channels further defined Dynamic Virtual Channels (DVCs), which are contained within a dedicated SVC. SVCs are created at the start of a session and remain until session termination, unlike DVCs which are created and torn down on demand.
It’s this 32-SVC binding which the CVE-2019-0708 patch fixes within the _IcaBindVirtualChannels and _IcaRebindVirtualChannels functions in the RDP driver termdd.sys. As can be seen in figure 1, the RDP connection sequence is initiated and channels are set up prior to Security Commencement, which enables CVE-2019-0708 to be wormable, since it can self-propagate over the network once it discovers open port 3389.
Figure 1: RDP Protocol Sequence
The vulnerability is due to the “MS_T120” SVC name being bound as a reference channel to the number 31 during the GCC Conference Initialization sequence of the RDP protocol. This channel name is used internally by Microsoft and there are no apparent legitimate use cases for a client to request connection over an SVC named “MS_T120.”
Figure 2 shows legitimate channel requests during the GCC Conference Initialization sequence with no MS_T120 channel.
Figure 2: Standard GCC Conference Initialization Sequence
However, during GCC Conference Initialization, the client supplies the channel name, which is not whitelisted by the server, meaning an attacker can set up another SVC named “MS_T120” on a channel other than 31. It’s the use of MS_T120 on a channel other than 31 that leads to heap memory corruption and remote code execution (RCE).
Figure 3 shows an abnormal channel request during the GCC Conference Initialization sequence with “MS_T120” channel on channel number 4.
Figure 3: Abnormal/Suspicious GCC Conference Initialization Sequence – MS_T120 on nonstandard channel
The components involved in the MS_T120 channel management are highlighted in figure 4. The MS_T120 reference channel is created in the rdpwsx.dll and the heap pool allocated in rdpwp.sys. The heap corruption happens in termdd.sys when the MS_T120 reference channel is processed within the context of a channel index other than 31.
Figure 4: Windows Kernel and User Components
The Microsoft patch as shown in figure 5 now adds a check for a client connection request using channel name “MS_T120” and ensures it binds to channel 31 only (1Fh) in the _IcaBindVirtualChannels and _IcaRebindVirtualChannels functions within termdd.sys.
Figure 5: Microsoft Patch Adding Channel Binding Check
After we investigated the patch being applied for both Windows 2003 and XP and understood how the RDP protocol was parsed before and after patch, we decided to test and create a Proof-of-Concept (PoC) that would use the vulnerability and remotely execute code on a victim’s machine to launch the calculator application, a well-known litmus test for remote code execution.
Figure 6: Screenshot of our PoC executing
For our setup, RDP was running on the machine and we confirmed we had the unpatched versions running on the test setup. The result of our exploit can be viewed in the following video:
There is a gray area to responsible disclosure. With our investigation we can confirm that the exploit is working and that it is possible to remotely execute code on a vulnerable system without authentication. Network Level Authentication should be effective in stopping this exploit if enabled; however, if an attacker has valid credentials, they can bypass this step.
As a patch is available, we decided not to provide earlier in-depth detail about the exploit or publicly release a proof of concept. That would, in our opinion, not be responsible and may further the interests of malicious adversaries.
- We can confirm that a patched system will stop the exploit and highly recommend patching as soon as possible.
- Disable RDP from outside of your network and limit it internally; disable entirely if not needed. The exploit is not successful when RDP is disabled.
- Client requests with “MS_T120” on any channel other than 31 during the GCC Conference Initialization sequence of the RDP protocol should be blocked unless there is evidence of a legitimate use case.
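The blocking recommendation above can be illustrated with a short sketch. This is a simplified, hypothetical detection helper, not the McAfee NSP signature: it assumes the (channel name, channel index) pairs have already been extracted from the GCC Conference Initialization request by some upstream packet parser.

```python
# Hypothetical detection sketch: given the (channel_name, channel_index)
# pairs extracted from a GCC Conference Initialization request, flag any
# binding of the internal "MS_T120" channel to an index other than 31.

def find_suspicious_bindings(channel_requests):
    """Return bindings matching the CVE-2019-0708 exploitation pattern."""
    return [
        (name, index)
        for name, index in channel_requests
        if name.rstrip("\x00").upper() == "MS_T120" and index != 31
    ]

# Example: a client requesting MS_T120 on channel 4, as in figure 3
requests = [("rdpdr", 0), ("cliprdr", 1), ("MS_T120", 4)]
print(find_suspicious_bindings(requests))
```

A legitimate session never needs a client-supplied MS_T120 binding, so anything this check flags is a strong candidate for blocking.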
It is important to note as well that the RDP default port can be changed in a registry field, and after a reboot RDP will be tied to the newly specified port. From a detection standpoint this is highly relevant.
Figure 7: RDP default port can be modified in the registry
Malware, or administrators inside a corporation, can change this with admin rights (or with a program that bypasses UAC) and write the new port in the registry; if the system is not patched, the vulnerability will still be exploitable over the new port.
McAfee NSP customers are protected via the following signature released on 5/21/2019:
0x47900c00 “RDP: Microsoft Remote Desktop MS_T120 Channel Bind Attempt”
If you have any questions, please contact McAfee Technical Support.
The post RDP Stands for “Really DO Patch!” – Understanding the Wormable RDP Vulnerability CVE-2019-0708 appeared first on McAfee Blogs.
Is the Personal Data on Your Smartphone Vulnerable? Listen to Find Out: Used for everything from banking and taking pictures, to navigating, streaming, and connecting, mobile devices are a treasure trove of sensitive personal data. On the latest episode of “Hackable?” the team investigates how secure that data really is by inviting a white-hat to try and remotely penetrate our host Geoff’s smartphone. Listen now on Apple Podcasts and learn if one errant click could expose everything, including your deleted photos.
Businesses everywhere are looking to cloud solutions to help expedite processes and improve their data storage strategy. All anyone is talking about these days is the cloud, which has seemingly overshadowed the conversation around individual devices and their security. However, many don’t realize these endpoint devices act as gateways to the cloud, which makes their security more pressing than ever. In fact, there is a unique relationship between endpoint security and cloud security, making it crucial for businesses to understand how this dynamic affects information security overall. Let’s explore exactly how these two are intertwined and how endpoint security can move the needle when it comes to securing the cloud.
Between public cloud, private cloud, hybrid cloud, and now multi-cloud, the cloud technology industry is massive and showing zero signs of slowing down. Adoption is rampant, with the cloud market expected to achieve a five-year compound annual growth rate (CAGR) of 22.5%, with public cloud services spending reaching $370 billion in 2022. With cloud adoption drawing so much attention from businesses, it’s as important as ever that enterprises keep security top of mind.
This need for security is only magnified by the latest trend in cloud tech – the multi-cloud strategy. With modern-day businesses having such a diverse set of needs, many have adopted either a hybrid or multi-cloud strategy in order to effectively organize and store a plethora of data – 74 percent of enterprises, as a matter of fact. This has many security vendors and personnel scrambling to adjust security architecture to meet the needs of the modern cloud strategy. And though all businesses must have an effective security plan in place that complements their cloud architecture, these security plans should always consider how these clouds can become compromised through individual gateways, or endpoint devices.
The Relationship Between Endpoint and Cloud
The cloud may be a virtual warehouse for your data, but every warehouse has a door or two. Endpoint devices act as doors to the cloud, as these mobile phones, computers, and more all connect to whichever cloud architecture an organization has implemented. That means that one endpoint device, if misused or mishandled, could create a vulnerable gateway to the cloud and therefore cause it to become compromised. Mind you – endpoint devices are not only gateways to the cloud, but also the last line of defense protecting an organization’s network in general.
Endpoint security is not only relevant in the world of cloud – it has a direct impact on an organization’s cloud, and overall, security. A compromised endpoint can lead to an exposed cloud, which could mean major data loss. Businesses therefore need to put processes into place that outline which assets users put where, and state any need-to-knows users should keep top of mind when using the cloud. Additionally, it’s equally important that every business makes the correct investment in cloud and endpoint security solutions that complement these processes.
Ensuring Security Strategy Is Holistic
As the device-to-cloud cybersecurity company, we at McAfee understand how important the connection is between endpoint and cloud and how vital it is businesses ensure both are secured. That’s why we’ve built out a holistic security strategy, offering both cloud security solutions and advanced endpoint products that help an organization cover all its bases.
If your business follows a holistic approach to security – covering every endpoint through to every cloud – you’ll be far better positioned to prevent data exposures. From there, you can have peace of mind about endpoint threats and focus on reaping the benefits of a smart cloud strategy.
DevSecOps can be challenging for many organizations when you consider all the areas of the DevOps process that require security testing. Organizations that begin to shift security “left” often find significant gaps in the security of infrastructure and operational components that are now integrated into the development process. Many of the technologies being used in DevOps are also very new to most organizations and are more recently starting to become “mainstream.” For example, we’re seeing more customers adopting microservices, utilizing cloud storage through Amazon S3, MongoDB, and Elasticsearch, deploying applications using containers, and managing those containers with newer orchestration technology like Kubernetes.
These new technologies allow faster development, but also come with the side effect of introducing a new attack surface and different types of vulnerabilities. Like any new technology, systems within a DevOps environment are often deployed insecurely and misconfigured. This makes the requirement to conduct security testing on the DevOps environment more important than ever. Moreover, what about the developers themselves from a security awareness perspective? What might they be discussing with peers on online forums, leaving in code repositories, or other areas on the Internet that may make their applications and the organization more susceptible to targeted phishing attacks, data leaks, and breaches that we hear about in the news on almost a daily basis?
What Is Veracode DevOps Penetration Testing?
Automating security testing is a key concept when building out a DevOps process and should not be overlooked. However, there is still a need for penetration testing in a DevOps environment. Penetration testing provides something that automation cannot -- the attacker’s perspective.
Building upon our strong application penetration testing service and highly skilled team, Veracode DevOps Penetration Testing provides testing above and beyond the application to include the operations and infrastructure components of applications. Technologies that can be in scope for this type of testing include, but are not limited to:
- Containers like Docker and Kubernetes orchestration
- Microservices and related interactions
- CI tool environments like Hudson and Jenkins
- Cloud infrastructure (AWS, Azure) and cloud storage databases
- Network infrastructure related to application deployment and configuration management
The Importance of Open Source Intelligence and DevOps
Veracode DevOps Penetration Testing also provides Open Source Intelligence (OSINT) analysis as part of every DevOps Penetration Test we perform. This analysis identifies misconfigured cloud storage databases such as AWS S3 buckets, Elasticsearch, MongoDB instances, and others. If you haven’t been paying attention to the news, misconfigured cloud storage databases are some of the largest sources of data leaks and breaches we see today*. In addition, we also leverage OSINT techniques to find vulnerabilities in the infrastructure that may leave your organization and applications exposed.
As part of this process, testers will also look into the activities of the developers themselves. Our testing checks whether developers are practicing proper security measures. For example, we will analyze GitHub repositories for exposed credentials, locate sensitive data related to app development, and see what’s being discussed about an organization’s applications within popular public developer forums like Stack Overflow.
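As a rough illustration of this kind of OSINT sweep, the sketch below scans text for strings shaped like AWS access key IDs, using the publicly documented `AKIA` prefix-and-length pattern. Real tooling covers many more secret formats and data sources; the function name and sample string here are our own, invented for the example.

```python
import re

# Illustrative sketch only: scan text (e.g., files pulled from a public
# repository) for patterns that look like leaked AWS access key IDs.
# Real secret-scanning tools check many more formats than this one regex.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_candidate_keys(text):
    """Return all substrings matching the AWS access key ID shape."""
    return AWS_KEY_RE.findall(text)

sample = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # committed by mistake'
print(find_candidate_keys(sample))
```

Flagged strings are only candidates; a real assessment would verify each hit and check whether the matching secret is still active.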
DevOps and Security Compliance
Security compliance does not magically go away when organizations “shift left.” That’s why Veracode DevOps Penetration Testing can be used to meet compliance requirements for PCI DSS 11.3 as well as GDPR Article 32 in the European Union. This requirement is also important for those organizations that need to comply with GDPR outside of the EU. GDPR Article 32 covers “Security of processing,” which requires that the data controller and processor implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. This includes “a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing” **. Penetration testing can help meet this new compliance requirement.
Veracode Is a Complete DevOps Testing Solution
Veracode DevOps Penetration Testing combined with Veracode’s static, dynamic, SCA, and application penetration testing provides the most comprehensive testing available for a DevOps environment in the market today. Contact your Veracode Sales or Services representative for more details on how to get started with your first Veracode DevOps Penetration Testing engagement.
Learn more about Veracode DevOps Penetration Testing here.
Security teams have historically been challenged by the choice of separate next-gen endpoint security technologies or a more integrated solution with a unified management console that can automate key capabilities. At this point it’s not really a choice at all – the threat landscape requires you to have both. The best layered and integrated defenses now include a broad portfolio of advanced prevention technologies, endpoint security controls, and advanced detection/response tools – all within an integrated system that goes beyond alerts and into insights that even a junior analyst can act on.
More Endpoints = More Vulnerabilities
Endpoints now extend well beyond on-premises servers, PCs, and traditional operating systems. Internet of Things devices such as printers, scanners, point-of-sale handhelds, and even wearables are vulnerable and can provide entry points for organized attacks seeking access to corporate networks. Mobile devices—both BYOD and corporate issued—are among the easiest targets for app-based attacks. Per the 2019 McAfee Mobile Threat Report, the number one threat category was hidden apps, which accounted for almost one-third of all mobile attacks.
Many enterprises are unaware of their target-rich endpoint environments, resulting in security teams struggling to maintain complete vigilance. A 2018 SANS Survey on Endpoint Protection and Response revealed some sobering statistics:
- 42% of respondents report having had their endpoints exploited
- 84% of endpoint breaches include more than one endpoint
- 20% didn’t know whether they’d been breached
Endpoint attacks are designed to exploit the hapless user, including web drive-by, social engineering/phishing, and ransomware. Because these attacks rely on human actions, there’s a need for increased monitoring and containment, along with user education.
The latest attacks have the ability to move laterally across your entire environment, challenging every endpoint until a vulnerability is found. Once inside your walls, all endpoints become vulnerable. Modern endpoint security must extend protection across the entire digital terrain with visibility to spot all potential risks.
Fewer Consoles = Better Efficiency
A 2018 MSA Research report on security management commissioned by McAfee revealed that 55% of organizations struggle to rationalize data when three or more consoles are present. Too many security products, devices, and separate consoles call for a large budget and additional employees who might struggle to maintain a secure environment.
In contrast, a single management console can efficiently coordinate the defenses built into modern devices while extending their overall posture with advanced capabilities—leaving nothing exposed. With ever-changing industry requirements, an integrated endpoint security approach ensures that basic standards and processes are included and up to date.
Why McAfee Endpoint Security
McAfee offers a broad portfolio of security solutions that combine established capabilities (firewall, reputation, and heuristics) with cutting-edge machine learning and containment, along with endpoint detection and response (EDR) into a single-agent all-inclusive management console.
Is it time you took a fresh look at your strategy? Learn more in this white paper: Five ways to rethink your endpoint protection strategy.
The post How to Get the Best Layered and Integrated Endpoint Protection appeared first on McAfee Blogs.
In my last blog, I explained that while AI possesses the mechanics of humanness, we need to train the technology to make the leap from mimicking humanness with logic, rationality, and analytics to emulating humanness with common sense. If we evolve AI to make this leap, the impact will be monumental, but it will require our global community to take a more disciplined approach to pervasive AI proliferation. Historically, our enthusiasm for and consumption of new technology has outpaced society’s ability to evolve legal, political, social, and ethical norms.
I spend most of my time thinking about AI in the context of how it will change the way we live. How it will change the way we interact, impact our social systems, and influence our morality. These technologies will permeate society, and the ubiquity of their usage in the future will have far-reaching implications. We are already seeing evidence of how AI is changing the way we live and interact with the world around us.
Think Google. It excites our curiosity and puts information at our fingertips. What is tripe – should I order it off the menu? Why do some frogs squirt blood from their eyes? What does exculpatory mean?
AI is weaving the digital world into the fabric of our lives and making information instantaneously available at our fingertips.
AI-enabled technology is also capable of anticipating our needs. Think Alexa. As a security professional I am a holdout on this technology, but the allure of it is indisputable. It makes the digital world accessible with a voice command. It understands more than we may want it to – Did someone tell Alexa to order coffee pods and toilet tissue, and if not, how did Alexa know to order toilet tissue? Maybe some things I just don’t want to know.
I also find it a bit creepy when my phone assumes (and gets it right) that I am going straight home from the grocery store letting me know, unsolicited, that it will take 28 minutes with traffic. How does it know I am going home? I could be going to the gym. It’s annoying that it knows I have no intention of working out. A human would at least have the decency to give me the travel time to both, allowing me to maintain the illusion that the gym was an equal possibility.
On a more serious note, AI-enabled technology will also impact our social, political and legal systems. As we incorporate it into more products and systems, issues related to privacy, morality and ethics will need to be addressed.
These questions are being asked now, but in anticipation of AI becoming embedded in everything we interact with it is critical that we begin to evolve our societal structures to address both the opportunities and the threats that will come with it.
The opportunities associated with AI are exciting. AI shows incredible promise in the medical world, where it is already being used in some areas: there are tools in use that leverage machine learning to help doctors identify disease-related patterns in imaging, and research is underway using AI to help fight cancer.
For example, in May 2018, The Guardian reported that skin cancer research using a convolutional neural network (CNN – based on AI) detected skin cancer 95% of the time compared to human dermatologists who detected it 86.6% of the time. Additionally, facial recognition in concert with AI may someday be commonplace in diagnosing rare genetic disorders, that today, may take months or years to diagnose.
But what happens when the diagnosis made by a machine is wrong? Who is liable legally? Do AI-based medical devices also need malpractice insurance?
The same types of questions arise with autonomous vehicles. Today it is always assumed a human is behind the wheel in control of the vehicle. Our laws are predicated on this assumption.
How must laws change to account for vehicles that do not have a human driver? Who is liable? How does our road system and infrastructure need to change?
The recent Uber accident case in Arizona determined that Uber was not liable for the death of a pedestrian killed by one of its autonomous vehicles. However, the safety driver, who was watching TV rather than the road, may be charged with manslaughter. How does this change when the car’s occupants are no longer safety drivers but simply passengers in fully autonomous vehicles? How will laws need to evolve at that point for cars and other types of AI-based “active and unaided” technology?
There are also risks to be considered in adopting pervasive AI. Legal and political safeguards need to be considered, either in the form of global guidelines or laws. Machines do not have a moral compass. Given that the definition of morality may differ depending on where you live, it will be extremely difficult to train morality into AI models.
Today most AI models lack the ability to determine right from wrong, ill intent from good intent, morally acceptable outcomes from morally reprehensible outcomes. AI does not understand whether the person asking the questions, providing it data, or giving it direction has malicious intent.
We may find ourselves on a moral precipice with AI. The safeguards or laws I mention above need to be considered before AI becomes more ubiquitous than it already is. AI will enable humankind to move forward in ways previously unimagined. It will also provide a powerful conduit through which humankind’s greatest shortcomings may be amplified.
The implications of technology that can profile entire segments of a population with little effort are disconcerting in a world where genocide has been a tragic reality, where civil obedience is coerced using social media, and where trust is undermined by those who use misinformation to sow political and societal discontent.
There is no doubt that AI will make this a better world. It gives us hope on so many fronts where technological impasses have impeded progress. Science may advance more rapidly, medical research may progress beyond current roadblocks, and daunting societal challenges around transportation and energy conservation may be solved. It is another tool in our technological arsenal, and the odds are overwhelmingly in favor of it improving the global human condition.
But realizing its advantages while mitigating its risks will require commitment and hard work from many conscientious minds across different quarters of our society. We as the technology community have an obligation to engage key stakeholders across the legal, political, social, and scientific communities to ensure that as a society we define the moral guardrails for AI before it becomes capable of defining them for, or in spite of, us.
Like all technology before it, AI’s social impacts must be anticipated and balanced against the values we hold dear. Like parents raising a child, we need to establish and insist that the technology reflect our values now while its growth is still in its infancy.
The post Why AI Innovation Must Reflect Our Values in Its Infancy appeared first on McAfee Blogs.
Is your family feeling more vulnerable online lately? If so, you aren’t alone. The recent WhatsApp bug and social media breaches recently have app users thinking twice about security.
Hackers behind the recent WhatsApp malware attack, it’s reported, could record conversations, steal private messages, grab photos and location data, and turn on a device’s camera and microphone. (Is anyone else feeling like you just got caught in the middle of an episode of Homeland?)
There’s not much you and your family can do about an attack like this except to stay on top of the news, be sure to share knowledge and react promptly, and discuss device security in your home as much as possible.
How much does your family love its apps? Here’s some insight:
- Facebook Messenger 3.408 billion downloads
- WhatsApp 2.979 billion downloads
- Instagram 1.843 billion downloads
- Skype 1.039 billion downloads
- Twitter 833.858 million downloads
- Candy Crush 805.826 million downloads
- Snapchat 782.837 million downloads
So, should you require your family to delete its favorite apps? Not necessarily. A certain degree of vulnerability comes with the territory of a digital culture.
However, what you can and should do to ease that sense of vulnerability is to adopt proactive safety habits — and teach your kids — to layer up safeguards wherever possible.
Tips to Help Your Family Avoid Being Hacked
Don’t be complacent. Talk to your kids about digital responsibility, and treat each app like a potential doorway that could expose your family’s data. Take the time to sit down and teach kids how to lock down privacy settings and the importance of keeping device software updated. Counsel them not to accept data breaches as a normal part of digital life, and show them how to fight back against online criminals by adopting a security mindset.
Power up your passwords. Teach your kids to use unique, complex passwords for all of their apps and to use multi-factor authentication when it’s offered.
Auto update all apps. App developers regularly issue updates to fix security vulnerabilities. You can turn on auto updates in your device’s Settings.
Add extra security. If you can add a robust, easy-to-install layer of security to protect your family’s devices, why not? McAfee mobile solutions are available for both iOS and Android and will help safeguard devices from cyber threats.
Avoid suspicious links. Hackers send malicious links through text, messenger, email, pop-ups, or within the context of an ongoing conversation. Teach your kids to be aware of these tricks and not to click suspicious links or download unfamiliar content.
Share responsibly. When you use chat apps like WhatsApp or Facebook Messenger, it’s easy to forget that an outsider can access your conversation. Remind your children that nothing is truly private, even on messaging apps where conversations feel private. Hackers are looking for personal information (birthday, address, hometown, or names of family members and pets) to crack your passwords, steal your identity, or gain access to other accounts.
What to Do If You Get Hacked
If one of your apps is compromised, act quickly to minimize the fallout. If you’ve been hacked, you may notice your device running slowly, a drain on your data, strange apps on your home screen, and evidence of calls, texts or emails you did not send.
Social media accounts. For Facebook and other social accounts, change your password immediately and alert your contacts that your account was compromised.
Review your purchase history. Check to see if there are any new apps or games installed that you didn’t authorize. You may have to cancel the credit card associated with your Google Play or iTunes account.
Revoke app access, delete old apps. Sometimes it’s not a person but a malicious app you may have downloaded that is wreaking havoc on your device. Encourage your kids to go through their apps and delete suspicious ones as well as apps they don’t use.
Bugs and breaches are part of our digital culture, but we don’t have to resign ourselves to being targets. By sharing knowledge and teaching kids to put on a security mindset, together, you can stay one step ahead of a cybercrook’s digital traps.
The post Breaches and Bugs: How Secure are Your Family’s Favorite Apps? appeared first on McAfee Blogs.
A new WhatsApp vulnerability has attracted the attention of the press and security professionals around the world. We wanted to provide some information and a quick summary.
This post will cover vulnerability analysis and how McAfee MVISION Mobile can help.
On May 13th, Facebook announced a vulnerability associated with all of its WhatsApp products. This vulnerability was reportedly exploited in the wild, and it was designated as CVE-2019-3568.
WhatsApp told the BBC its security team was the first to identify the flaw. It shared that information with human rights groups, selected security vendors and the US Department of Justice earlier this month.
The CVE-2019-3568 Vulnerability Explained
WhatsApp suffers from a buffer overflow weakness, meaning an attacker can leverage it to run malicious code on the device. Data packets can be manipulated during the start of a voice call, leading to the overflow being triggered and the attacker commandeering the application. Attackers can then deploy surveillance tools to the device to use against the target.
A buffer overflow vulnerability in the WhatsApp VOIP (voice over internet protocol) stack allows remote code execution via a specially crafted series of SRTP (secure real-time transport protocol) packets sent to a target phone number. The following versions are affected:
- WhatsApp for Android prior to v2.19.134
- WhatsApp Business for Android prior to v2.19.44
- WhatsApp for iOS prior to v2.19.51
- WhatsApp Business for iOS prior to v2.19.51
- WhatsApp for Windows Phone prior to v2.18.348
- WhatsApp for Tizen prior to v2.18.15
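The version thresholds above lend themselves to a simple check. The sketch below is our own illustration, not WhatsApp or McAfee tooling: it compares an installed version string against the first fixed release for each platform, and the platform keys and parsing helper are assumptions made for the example.

```python
# First patched release per platform, per the advisory above.
FIRST_FIXED = {
    "android": (2, 19, 134),
    "android-business": (2, 19, 44),
    "ios": (2, 19, 51),
    "ios-business": (2, 19, 51),
    "windows-phone": (2, 18, 348),
    "tizen": (2, 18, 15),
}

def parse_version(s):
    """Turn '2.19.133' into a comparable tuple (2, 19, 133)."""
    return tuple(int(part) for part in s.split("."))

def is_vulnerable(platform, version):
    """True if the installed version predates the patched release."""
    return parse_version(version) < FIRST_FIXED[platform]

print(is_vulnerable("android", "2.19.133"))  # version older than the fix
print(is_vulnerable("ios", "2.19.51"))       # exactly the patched version
```

Tuple comparison handles the per-component ordering correctly, so "2.19.9" sorts below "2.19.51" as intended.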
The Alleged Exploit
An exploit of the vulnerability was used in an attempted attack on the phone of a UK-based attorney on 12 May, the Financial Times reported. The reported attack involved using WhatsApp’s voice calling function to ring a target’s device. Even if the call was not picked up, the surveillance software could be installed.
How MVISION Mobile can combat CVE-2019-3568 Attacks
To date, the detection technology inside MVISION Mobile has detected 100 percent of zero-day device exploits without requiring an update.
MVISION Mobile helps protect customers by identifying at-risk iOS and Android devices and active threats trying to leverage the vulnerability. It leverages Advanced App Analysis capabilities to identify every device running a vulnerable version of WhatsApp, so administrators can find exposed devices and establish custom policies to address the risk. If the exploit attempts to elevate privileges and compromise the device, MVISION Mobile would detect the attack on the device.
The post How MVISION Mobile can combat the WhatsApp Buffer Overflow Vulnerability appeared first on McAfee Blogs.
Every day, we protect users from hundreds of thousands of account hijacking attempts. Most attacks stem from automated bots with access to third-party password breaches, but we also see phishing and targeted attacks. Earlier this year, we suggested how just five simple steps like adding a recovery phone number can help keep you safe, but we wanted to prove it in practice.
We teamed up with researchers from New York University and the University of California, San Diego to find out just how effective basic account hygiene is at preventing hijacking. The year-long study, on wide-scale attacks and targeted attacks, was presented on Wednesday at a gathering of experts, policy makers, and users called The Web Conference.
Our research shows that simply adding a recovery phone number to your Google Account can block up to 100% of automated bots, 99% of bulk phishing attacks, and 66% of targeted attacks that occurred during our investigation.
Google’s automatic, proactive hijacking protection
We provide an automatic, proactive layer of security to better protect all our users against account hijacking. Here’s how it works: if we detect a suspicious sign-in attempt (say, from a new location or device), we’ll ask for additional proof that it’s really you. This proof might be confirming you have access to a trusted phone or answering a question where only you know the correct response.
If you’ve signed into your phone or set up a recovery phone number, we can provide a similar level of protection to 2-Step Verification via device-based challenges. We found that an SMS code sent to a recovery phone number helped block 100% of automated bots, 96% of bulk phishing attacks, and 76% of targeted attacks. On-device prompts, a more secure replacement for SMS, helped prevent 100% of automated bots, 99% of bulk phishing attacks and 90% of targeted attacks.
Given the security benefits of challenges, one might ask why we don’t require them for all sign-ins. The answer is that challenges introduce additional friction and increase the risk of account lockout. In an experiment, 38% of users did not have access to their phone when challenged. Another 34% of users could not recall their secondary email address.
If you lose access to your phone, or can’t solve a challenge, you can always return to a trusted device you previously logged in from to gain access to your account.
Digging into “hack for hire” attacks
Where most bots and phishing attacks are blocked by our automatic protections, targeted attacks are more pernicious. As part of our ongoing efforts to monitor hijacking threats, we have been investigating emerging “hack for hire” criminal groups that purport to break into a single account for a fee on the order of $750 USD. These attackers often rely on spear phishing emails that impersonate family members, colleagues, government officials, or even Google. If the target doesn’t fall for the first spear phishing attempt, follow-on attacks persist for upwards of a month.
Take a moment to help keep your account secure
Just like buckling a seat belt, take a moment to follow our five tips to help keep your account secure. As our research shows, one of the easiest things you can do to protect your Google Account is to set up a recovery phone number. For high-risk users—like journalists, activists, business leaders, and political campaign teams—our Advanced Protection Program provides the highest level of security. You can also help protect your non-Google accounts from third-party password breaches by installing the Password Checkup Chrome extension.
If you have a moment, take a look at our 1-minute videos to get caught up on the latest goings-on in the privacy community. California Consumer Privacy Act – Ben Siegel discusses the California Consumer Privacy Act and how some of the advancing amendments could drastically change the CCPA. Privacy Awareness Ideas […]
The WhatsApp security flaw received by far the most media attention and was the leading front-page news story for a day. The WhatsApp vulnerability (CVE-2019-3568) impacts both the iPhone and Android versions of the mobile messaging app, allowing an attacker to install surveillance software, namely spyware called Pegasus, which can access the smartphone's call logs and text messages, and can covertly enable and record the camera and microphone.
From a technical perspective, the vulnerability (CVE-2019-3568) is a buffer overflow in WhatsApp's VOIP stack. It makes remote code execution possible by sending specially crafted SRTCP packets to the target phone, a sophisticated exploit.
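To see why a malformed packet can matter, consider the defensive check such a parser needs. The toy parser below is a Python sketch (not WhatsApp's native code) that validates a packet's claimed length against the actual buffer before using it; skipping that check in native code is the essence of this overflow class:

```python
import struct

MAX_PAYLOAD = 512  # size of the fixed buffer in this toy parser

def parse_packet(data: bytes) -> bytes:
    """Parse a toy length-prefixed packet, rejecting oversized payloads.

    A native parser that trusted the 16-bit length field and copied that many
    bytes into a fixed buffer would exhibit exactly this overflow class.
    """
    if len(data) < 2:
        raise ValueError("truncated header")
    (claimed_len,) = struct.unpack_from(">H", data, 0)
    payload = data[2:]
    if claimed_len > MAX_PAYLOAD or claimed_len > len(payload):
        raise ValueError("length field exceeds buffer bounds")
    return payload[:claimed_len]
```

In memory-safe languages an out-of-bounds copy raises an error; in C or C++ it silently overwrites adjacent memory, which is what turns a parsing bug into remote code execution.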
Should you be concerned?
WhatsApp said it believed only a "select number of users were targeted through this vulnerability by an advanced cyber actor." According to the FT, that threat actor was an Israeli company called ‘NSO Group’, which developed the exploit to sell on. NSO advertises that it sells products to government agencies "for fighting terrorism and aiding law enforcement investigations". NSO products (aka "spyware") are known to be used by government agencies in the UAE, Saudi Arabia, and Mexico.
So, if you are one of WhatsApp's 1.5 billion users and not a Middle East political activist or a Mexican criminal, you probably shouldn't worry too much about your smartphone having been exploited in the past. If you had been exploited, there would be signs, such as unusual glitches and activity on your phone. Despite the low risk at present, all WhatsApp users should quickly update the app before criminals attempt to ‘copycat’ the NSO Group exploitation.
How to Prevent
Update the WhatsApp app.
On iOS:
- Open the Apple App Store app
- Search for WhatsApp Messenger
- Tap 'Update' and the latest version of WhatsApp will be installed
- App version 2.19.51 and above fixes the vulnerability
On Android:
- Open the Google Play Store
- Tap the menu in the top left corner
- Go to “My Apps & Games”
- Tap ‘Update’ next to WhatsApp Messenger and the latest version of WhatsApp will be installed
- App version 2.19.134 and above fixes the vulnerability
How to Prevent
Apply the latest Microsoft Windows update. Microsoft has said anti-virus products will not provide any protection against exploitation of this vulnerability; therefore, applying the Microsoft May 2019 Security Update, released on Tuesday 14 May 2019, is the only way to be certain of protecting against exploitation of this critical vulnerability.
Ensure automatic updates are always kept switched on. By default, Windows should attempt to download and install the latest security updates; typically you will be prompted to apply the update and accept a reboot. Do this without delay.
To double check, select the Start menu, followed by the gear cog icon on the left. Then, select Update & Security and Windows Update.
Managing a security program in today’s ever-changing cyber threat landscape is no small feat. Many administrators struggle with knowing where to even start. Cybersecurity programs must be continually evaluated and should evolve as cyber threats and company risks change; however, these steps will guide you in the right direction to begin strengthening your security program today.
1. Assess your current security program.
The best way to assess a security program is to first choose a framework best for your company. A good framework to follow is the NIST Cybersecurity Framework, which is a comprehensive guide to baseline security requirements and controls any company can implement to strengthen a security program. For companies of all sizes, implementing a security control or practice must be evaluated from a business standpoint to determine if the benefit to the business outweighs the cost of the security control. Following a framework for this evaluation will help you prioritize cybersecurity initiatives and give your organization a clear roadmap for the way you want to develop a cybersecurity program.
2. Identify what data you have and where it lives.
Data cannot be protected if the custodians don’t know it exists, or where it exists. Identification of the data stored, created, or controlled by a company is crucial to understanding your cybersecurity and data protection priorities. Further, identifying whether sensitive data is stored in cloud services, on hard drives, or in file servers can drastically change the strategy needed in order to protect that data. Even Data Loss Prevention (DLP) tools are less effective if the tool is not focused on the right locations to determine whether data is being accessed or is leaving the protected network in some way. Identifying data locations can also help you to ensure your proprietary or confidential data is moved from less secure locations, such as private cloud storage accounts, to secure, company-controlled environments like an enterprise cloud account.
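At its simplest, the pattern matching behind a DLP tool looks like the sketch below; real products use far richer detection (checksum validation, context, document fingerprinting), and these regexes are illustrative only:

```python
# Minimal sketch of a DLP-style content scan: flag text that appears to
# contain sensitive identifiers. Patterns are illustrative, not production-grade.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_text(text):
    """Return the labels of every sensitive pattern found in `text`."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]
```

Knowing where data lives determines where a scan like this must run: endpoints, file servers, or via a cloud provider's content inspection APIs.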
3. Implement and enforce policies to combat insider threat.
Policies and procedures are essential to combat the human element of cybersecurity. Employees often do not understand what they can and cannot do with a company’s documents, hardware, and system access if there are no policies in place to guide them. An insider threat isn’t necessarily a nefarious actor out to steal company data; it often presents itself in examples such as a well-meaning employee who shares a document with a partner in an insecure way – exposing the data to unauthorized access.
4. Implement a security awareness training program.
Continuing with the theme of well-meaning employees, phishing is the dominant form of social attack; the Verizon DBIR reports that phishing and pretexting account for 98% of social incidents. Anti-phishing measures can only go so far to detect phishing attacks, so it’s up to the employee to know how to recognize a phishing email and what to do with it. Security awareness training can teach employees to recognize the signs of phishing emails and may prevent them and the company from falling victim to a phishing attack.
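One of the signs training usually emphasizes, a link whose visible text names a different domain than its real target, can even be checked mechanically. The function below is a simplified, illustrative heuristic, not a production anti-phishing filter:

```python
# Illustrative heuristic only: flag links whose display text claims one
# domain while the href actually points somewhere else.
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """True when the link text names a domain different from the real target."""
    shown = display_text.lower().strip().rstrip("/")
    actual = urlparse(href).netloc.lower()
    # Only meaningful when the display text itself looks like a bare domain.
    if "." not in shown or " " in shown:
        return False
    return not (actual == shown or actual.endswith("." + shown))
```

For example, `link_mismatch("paypal.com", "http://evil.example/login")` flags the link, while a link whose href is a legitimate subdomain of the shown domain passes.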
5. Talk to your IT team about multi-factor authentication and anti-phishing measures.
Multi-factor authentication (MFA) is one of the best security controls you can implement to prevent unauthorized access to company systems. Simply put, MFA works by requiring not only something the user knows (e.g. a password) but also something the user has (e.g. a code texted to a cell phone or, better yet, a hardware key the employee has to interact with) to access a system. Many instances of unauthorized system access could have been thwarted by a company’s use of MFA on its critical systems. In addition, as mentioned above, phishing attacks are responsible for a large majority of data breaches, and anti-phishing measures should be taken to protect corporate email systems.
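The "something the user has" factor in app-based MFA typically boils down to the TOTP scheme from RFC 6238. The sketch below implements it with only the standard library to show the mechanism; real deployments should use vetted libraries and securely provisioned secrets:

```python
# Sketch of the time-based one-time password (TOTP, RFC 6238) mechanism that
# underlies many MFA code generators. Stdlib only; illustrative, not hardened.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6):
    """Derive the one-time code for the current time window."""
    now = timestamp if timestamp is not None else time.time()
    counter = int(now // step)                      # 30-second window index
    msg = struct.pack(">Q", counter)                # big-endian 64-bit counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For example, with the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, `totp(..., digits=8)` yields `"94287082"`, matching the spec's published test vector.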
6. Implement a third party vendor risk management program.
Many companies work with third-party vendors and service providers and in some cases, these providers need access into corporate infrastructure and IT systems. You can invest millions or even billions into your cybersecurity program, but it can be for nothing if a trusted service provider becomes compromised. As is the case in many high-profile breaches, it was the service provider who suffered the breach, in turn causing their partners to suffer the same fate. Implement a third-party risk management program in which new and existing service providers must show proof of their internal security program practices and controls, before allowing them access into a corporate system.
7. Implement onboarding and offboarding policies that integrate HR and IT.
When onboarding a new employee, a policy needs to be in place that allows your HR and IT departments to work together to determine what information the new hire needs access to in order to do their job. Equally important, you must also have a policy in place for offboarding. Without proper offboarding policies, former employees or contractors may still be able to access certain IT systems well after they’ve left the organization. Cases where former contractors or employees retained access to a company’s IT systems for months or even years after that access should have been revoked are not uncommon. In some cases, an employee who leaves a company involuntarily decides to use their remaining access to destroy documents or steal company intellectual property, and the damage can be as severe as deleting entire servers and infrastructure. Access to systems should be approved by HR (to prevent extra accounts and backdoors from being created without company knowledge), and departed employees should be immediately deprovisioned from all systems.
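Operationally, deprovisioning amounts to a loop over every system that granted access, with failures recorded for manual follow-up. The sketch below is purely illustrative; the system names and revoke callables stand in for whatever directory, VPN, and SaaS APIs your environment actually uses:

```python
# Illustrative sketch only: the systems dict maps a system name to a callable
# that revokes the user's access there (hypothetical placeholders for your
# real directory, VPN, and SaaS APIs).
def offboard(username, systems):
    """Revoke a departing user's access everywhere, recording any failures."""
    failures = []
    for name, revoke in systems.items():
        try:
            revoke(username)
        except Exception as exc:
            # Never let one broken integration silently skip the rest.
            failures.append((name, str(exc)))
    return failures

# systems = {"ldap": ldap_disable, "vpn": vpn_revoke, "email": mail_suspend}
# leftover = offboard("jdoe", systems)  # anything left over needs manual follow-up
```

The key design point is that the loop continues past failures: a single unreachable API should produce a follow-up ticket, not leave every remaining system untouched.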
Implementing any cybersecurity controls or program initiatives requires a company culture shift and executive buy-in. However, organizations, no matter the size, simply cannot afford to ignore security, nor can they wait for a breach to occur before security is taken seriously. The steps outlined in this post will be an excellent start to a strong security program and will help you gain traction for future program changes and improvements.
Download the Checklist to Share.
The post 7 Steps to Strengthen Your Cybersecurity Program Today appeared first on GRA Quantum.
It’s best practice to kick off your AppSec initiative by starting small, scanning your most business-critical apps, and addressing the most severe flaws. But it’s also best practice to scale your program to eventually cover your entire app landscape, and all flaws. Why? First, because you can be breached through non-critical apps; JP Morgan was breached through third-party software supporting its charitable road race, and Target was breached through its HVAC vendor’s software. Second, you can be breached through a low-severity vulnerability. Oftentimes, a low-severity flaw can be just as risky, if not more so, than a higher-severity flaw. For example, a low-severity information leakage flaw could provide just the right amount of system knowledge an attacker needs to leverage a vulnerability that might otherwise be difficult to exploit.
How do you make this transition from few to many, especially with limited security staff and expertise? This is a significant challenge. In fact, we typically see AppSec programs fail for two reasons: Lack of experience in running an application security program, and the inability to hire enough qualified staff to run application security tools at scale. Very few application security managers have run large programs before and have the experience to predict ramp up and adoption. The global shortage of security professionals also makes it difficult to hire enough people to coordinate between development and security teams. The 2018 Cyberthreat Defense Report found that a rising shortage of skilled personnel is the number one inhibitor organizations face when trying to establish a security program.
Yet, we’ve also helped thousands of customers grow and mature their AppSec programs over the past 12 years, and we know there are a few keys to effectively scaling an application security program. These keys include:
The right partner
Considering the skills shortage, engaging outside AppSec expertise goes a long way, both to establish your program’s goals and roadmap and keep it on track, and to guide you through fixing the flaws you find. We aren’t suggesting you replace your security team with consultants, but rather that you complement it with specialized AppSec expertise and free your team to focus on managing risk by taking these tasks off their plates:
- Addressing the blocking and tackling of onboarding
- Application security program management
- Identifying and addressing barriers to success
- Working with development teams to ensure they are finding and remediating vulnerabilities
We’ve seen the difference this support makes: Veracode customers who work with our security program managers grow their application coverage by 25 percent each year, decrease their time to deployment, and demonstrate better vulnerability detection and remediation metrics.
In fact, data collected for our State of Software Security report found that developers who get remediation coaching from our security experts fix 88 percent more flaws.
Another way to scale your AppSec program is to develop and nurture security champions within your development teams. While these developers aren’t (and don’t have to be) security pros, they can act as the security conscience of the team by keeping their eyes and ears open for potential issues. The team can then fix the issues in development or call in your organization’s security experts for guidance. An embedded security champion can effectively help an organization make up for a lack of security coverage or skills by acting as a force multiplier who can pass on security best practices, answer questions, and raise security awareness. Because your security champion speaks the lingo of developers and is intimately involved in your organization’s development projects, he or she can communicate security issues in a way that development teams will understand and embrace.
How can you start developing security champions?
- Get leadership buy-in. Make sure management, the security team, and the Scrum leaders are willing to invest the time, money, and resources it will take to make security champions effective.
- Set the standard. Create expectations for what security champions should do and incorporate it into their pre-existing peer review work to minimize disruptions.
- Track success. Make security a KPI so your organization can evaluate the ROI of the program.
- Provide training. Volunteers can bring passion, but it’s up to your security experts to provide the knowledge your security champions will need to review code for flaws and pass best practices on to the development team.
- Build community. Make sure security champions have ample opportunity to meet with each other and the security team to discuss specific issues and overall trends.
In addition, a cloud-based application security solution can help you scale your program without a lot of extra cost or hassle compared to an on-premises solution. When an on-premises application security program needs to be scaled, enterprises frequently need to track down more of those hard-to-find security specialists, in addition to installing more servers.
Things that usually cost extra in an on-premises solution — features such as integrations, onboarding, upgrades, and maintenance — are all included with a cloud-based solution. This allows your security team to focus on scaling your AppSec efforts without worrying about going over budget.
Application security is about more than scanning; the ability to scale your program is a critical factor that can make or break your program. Learn more about AppSec best practices in our new eBook, Application Security: Beyond Scanning.
Often, a data breach reveals other issues a business is experiencing, but it isn’t every day that I see the opposite. When I heard about what was happening at Bethesda Softworks and their online game, I was interested immediately. The background on this is simple enough. Bethesda is a well-known video game […]
Messaging apps are a common form of digital communication these days, with Facebook’s WhatsApp being one of the most popular options out there. The communication platform boasts over 1.5 billion users – who now need to immediately update the app due to a new security threat. In fact, WhatsApp just announced a recently discovered security vulnerability that exposes both iOS and Android devices to malicious spyware.
So, how does this cyberthreat work, exactly? Leveraging the new WhatsApp bug, hackers first begin the scheme by calling an innocent user via the app. Regardless of whether the user picks up or not, the attacker can use that phone call to infect the device with malicious spyware. From there, crooks can potentially snoop around the user’s device, likely without the victim’s knowledge.
Fortunately, WhatsApp has already issued a patch that solves the problem – which means users will fix the bug if they update their app immediately. But that doesn’t mean users shouldn’t keep security top of mind, now and in the future, when it comes to messaging apps and the crucial data they contain. With that said, here are a few security steps to follow:
- Flip on automatic updates. No matter the type of application or platform, it’s always crucial to keep your software up-to-date, as fixes for vulnerabilities are usually included in each new version. Turning on automatic updates will ensure that you are always equipped with the latest security patches.
- Be selective about what information you share. When chatting with fellow users on WhatsApp and other messaging platforms, it’s important you’re always careful of sharing personal data. Never exchange financial information or crucial personal details over the app, as they can possibly be stolen in the chance your device does become compromised with spyware or other malware.
- Protect your mobile phones from spyware. To help prevent your device from becoming compromised by malicious software, such as this WhatsApp spyware, be sure to add an extra layer of security to it by leveraging a mobile security solution. With McAfee Mobile Security being available for both iOS and Android, devices of all types will remain protected from cyberthreats.
The post 3 Tips for Protecting Against the New WhatsApp Bug appeared first on McAfee Blogs.
We’ve become aware of an issue that affects the Bluetooth Low Energy (BLE) version of the Titan Security Key available in the U.S. and are providing users with the immediate steps they need to take to protect themselves and to receive a free replacement key. This bug affects Bluetooth pairing only, so non-Bluetooth security keys are not affected. Current users of Bluetooth Titan Security Keys should continue to use their existing keys while waiting for a replacement, since security keys provide the strongest protection against phishing.
What is the security issue?
Due to a misconfiguration in the Titan Security Keys’ Bluetooth pairing protocols, it is possible for an attacker who is physically close to you at the moment you use your security key -- within approximately 30 feet -- to (a) communicate with your security key, or (b) communicate with the device to which your key is paired. In order for the misconfiguration to be exploited, an attacker would have to align a series of events in close coordination:
- When you’re trying to sign into an account on your device, you are normally asked to press the button on your BLE security key to activate it. An attacker in close physical proximity at that moment in time can potentially connect their own device to your affected security key before your own device connects. In this set of circumstances, the attacker could sign into your account using their own device if the attacker somehow already obtained your username and password and could time these events exactly.
- Before you can use your security key, it must be paired to your device. Once paired, an attacker in close physical proximity to you could use their device to masquerade as your affected security key and connect to your device at the moment you are asked to press the button on your key. After that, they could attempt to change their device to appear as a Bluetooth keyboard or mouse and potentially take actions on your device.
This security issue does not affect the primary purpose of security keys, which is to protect you against phishing by a remote attacker. Security keys remain the strongest available protection against phishing; it is still safer to use a key that has this issue, rather than turning off security key-based two-step verification (2SV) on your Google Account or downgrading to less phishing-resistant methods (e.g. SMS codes or prompts sent to your device). This local proximity Bluetooth issue does not affect USB or NFC security keys.
Am I affected?
This issue affects the BLE version of Titan Security Keys. To determine if your key is affected, check the back of the key. If it has a “T1” or “T2” on the back of the key, your key is affected by the issue and is eligible for free replacement.
Steps to protect yourself
If you want to minimize the remaining risk until you receive your replacement keys, you can perform the following additional steps:
On devices running iOS version 12.2 or earlier, we recommend using your affected security key in a private place where a potential attacker is not within close physical proximity (approximately 30 feet). After you’ve used your key to sign into your Google Account on your device, immediately unpair it. You can use your key in this manner again while waiting for your replacement, until you update to iOS 12.3.
Once you update to iOS 12.3, your affected security key will no longer work. You will not be able to use your affected key to sign into your Google Account, or any other account protected by the key, and you will need to order a replacement key. If you are already signed into your Google Account on your iOS device, do not sign out because you won’t be able to sign in again until you get a new key. If you are locked out of your Google Account on your iOS device before your replacement key arrives, see these instructions for getting back into your account. Note that you can continue to sign into your Google Account on non-iOS devices.
On Android and other devices:
We recommend using your affected security key in a private place where a potential attacker is not within close physical proximity (approximately 30 feet). After you’ve used your affected security key to sign into your Google Account, immediately unpair it. Android devices updated with the upcoming June 2019 Security Patch Level (SPL) and beyond will automatically unpair affected Bluetooth devices, so you won’t need to unpair manually. You can also continue to use your USB or NFC security keys, which are supported on Android and not affected by this issue.
How to get a replacement key
We recommend that everyone with an affected BLE Titan Security Key get a free replacement by visiting google.com/replacemykey.
Is it still safe to use my affected BLE Titan Security Key?
It is much safer to use the affected key than no key at all. Security keys are the strongest protection against phishing currently available.
Cloud management is a critical topic that organizations are looking at to simplify operations, increase IT efficiency, and reduce costs. Although cloud adoption has risen in the past few years, some organizations aren’t seeing the results they’d envisioned. That’s why we’re sharing a few of the top cloud management challenges enterprises need to be cautious of and how to overcome them.
Cloud Management Challenge #1: Security
Given the overall trend toward migrating resources to the cloud, a rise in security threats shouldn’t be surprising. Per our latest Cloud Risk and Adoption Report, the average enterprise organization experiences 31.3 cloud-related security threats each month—a 27.7% increase over the same period last year. Broken down by category, these include insider threats (both accidental and malicious), privileged user threats, and threats arising from potentially compromised accounts.
To mitigate these types of cloud threats and risks, we have a few recommendations to better protect your business. Start with auditing your Amazon Web Services, Microsoft Azure, Google Cloud Platform, or other IaaS/PaaS configurations to get ahead of misconfigurations before they open a hole in the integrity of your security posture. Second, it’s important to understand which cloud services hold most of your sensitive data. Once that’s determined, extend data loss prevention (DLP) policies to those services, or build them in the cloud if you don’t already have a DLP practice. Right along with controlling the data itself goes controlling who the data can go to, so lock down sharing where your sensitive data lives.
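As one concrete slice of such an IaaS configuration audit, the sketch below checks S3 bucket ACL grants for the all-user groups that make a bucket world-readable. The grant dictionaries mirror the shape boto3's `get_bucket_acl` returns; the boto3 calls themselves are left commented out since they require AWS credentials:

```python
# Hedged sketch of one misconfiguration check: S3 ACL grants to everyone.
# The grant format matches boto3's get_bucket_acl response structure.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(grants):
    """Return the permissions granted to all-user groups in an ACL grant list."""
    return [
        g["Permission"]
        for g in grants
        if g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
    ]

# To run against a real account (credentials required):
# import boto3
# s3 = boto3.client("s3")
# for bucket in s3.list_buckets()["Buckets"]:
#     acl = s3.get_bucket_acl(Bucket=bucket["Name"])
#     if public_grants(acl["Grants"]):
#         print("public bucket:", bucket["Name"])
```

A production audit would also cover bucket policies, public access block settings, and the equivalent controls in Azure and GCP; this illustrates only the ACL dimension.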
Cloud Management Challenge #2: Governance
Many companies deploy cloud systems without an adequate governance plan, which increases the risk of security breaches and inefficiency. Lack of data governance may result in a serious financial loss, and failing to protect sensitive data could result in a data breach.
Cloud management and cloud governance are often interlinked, and keeping track of your cloud infrastructure is essential. Governance and infrastructure planning can help mitigate certain infrastructure risks; therefore, automated cloud discovery and governance tools will help your business safeguard operations.
Cloud Management Challenge #3: Proficiency
You may also be faced with the challenge of ensuring that IT employees have the proper expertise to manage their services in a cloud environment. You may need to decide to either hire a new team that is already familiar with cloud environments or train your existing staff.
In the end, training your existing staff is less expensive, scalable, and faster. Knowledge is key when transforming your business and shifting your operational model to the cloud. Accept the challenge and train your employees, give them hands-on time, and get them properly certified. For security professionals, the Cloud Security Alliance is a great place to start for training programs.
Cloud Management Challenge #4: Performance
Enterprises are continually looking for ways to improve their application performance and meet internal/external SLAs. However, even in the cloud, they may not immediately achieve these benefits. Cloud performance is complex, and if you’re having performance issues it’s important to look at the variety of issues that could be occurring in your environment.
How should you approach finding and fixing the root causes of cloud performance issues? Check your infrastructure and the applications themselves. Examine the applications you ported over from on-premises data centers, and evaluate whether newer cloud technologies such as containers or serverless computing could replace some of your application components and improve performance. Also, evaluate multiple cloud providers for your application or infrastructure needs, as each has its own offerings and geographic distribution.
Cloud Management Challenge #5: Cost
Managing cloud costs can be a challenge, but in general, migrating to the cloud offers companies enormous savings. We see organizations investing more dollars in the cloud to bring greater flexibility to their enterprise, allowing them to quickly and efficiently react to the changing market conditions. Organizations are moving more of their services to the cloud, which is resulting in higher spend with cloud service providers.
Shifting IT cost from on-premises to the cloud on its own is not the challenge – it is the unmonitored sprawl of cloud resources that typically spikes cost for organizations. Managing your cloud costs can be simple if you effectively monitor use. With visibility into unsanctioned, “Shadow” cloud use, your organization can find the areas where there is unnecessary waste of resources. By auditing your cloud usage, you may even determine new ways to manage cost, such as re-architecting your workloads using a PaaS architecture, which may be more cost-effective.
Migrating to the cloud is a challenge but can bring a wide range of benefits to your organization with a reduction in costs, unlimited scalability, improved security, and overall a faster business model. These days, everyone is in the cloud but that doesn’t mean your business’s success should be hindered by the common challenges of cloud management.
For more on how to secure your cloud environment, check out McAfee MVISION Cloud, a cloud access security broker (CASB) that protects data where it lives with a solution that was built natively in the cloud, for the cloud.
The post Cloud 101: Navigating the Top 5 Cloud Management Challenges appeared first on McAfee Blogs.
Industry Security Patches
I’ll admit it. I am old enough that my younger adult days have not been recorded for all to access on the internet. Many of my generation – the X’ers – relish this lucky position when it comes to the intersection of life and the technological innovation time line. Not that the choice was mine, […]
On Monday, The Financial Times reported that attackers have been exploiting a buffer overflow vulnerability in the popular messaging service WhatsApp. The vulnerability has been fixed, and updates were released on Friday. WhatsApp, owned by Facebook, is urging both iPhone and Android users to update the app as soon as possible.
Veracode’s State of Software Security Volume 9 found that buffer overflow was the 25th most common vulnerability, found in 3 percent of applications. Although not as prevalent as some other flaw categories (like XSS or SQL injection), it is a highly exploitable flaw, and organizations should be aware of it and address it quickly. Yet our data also reveals that organizations are taking a troubling amount of time to fix buffer overflow flaws: it took organizations an average of 225 days to address 75 percent of these flaws.
According to WhatsApp, the vulnerability (CVE-2019-3568) in the VOIP stack allows remote code execution. The RCE vulnerability in WhatsApp is exploited by sending malicious code to targeted phone numbers. Attackers can exploit the vulnerability by using the WhatsApp calling function to call a user’s mobile phone and then install surveillance software on the device. According to The Financial Times, a user doesn’t need to answer the call to be infected, and the calls seem to disappear from logs.
NSO Group, part-owned by private equity firm Novalpina Capital, is an Israeli company that created Pegasus, the software that is believed to be an integral element for successfully pulling off the attacks. The BBC reports that NSO’s flagship software can gather personal data from a targeted device using the microphone and camera, as well as capturing location data.
WhatsApp has reported the vulnerability to its lead regulator in the European Union, Ireland’s Data Protection Commission (DPC), though it is still investigating whether or not any EU user data has been affected as a result of the incident. The company also reported the vulnerability to the US Department of Justice last week.
WhatsApp is one of the most popular messaging tools in the world, with a sizeable 1.5 billion monthly users. It’s favored for its high level of security and privacy, as messages are encrypted end-to-end. This news adds to a turbulent period at Facebook, which bought WhatsApp in 2014 for $19 billion. Last month, a security research firm revealed 540 million Facebook accounts were publicly exposed, and a co-founder, Chris Hughes, recently advocated in The New York Times that the company should be broken up for fear that it has too much influence and power.
Obviously, my initial thought was that it was a phishing email: a decent-quality and well-timed attempt, given Liverpool and Tottenham Hotspur were confirmed as finalists after very dramatic semi-final matches on the previous nights. I logged into my Zavvi account directly, then reset my password just in case, and after a bit of checking of the embedded links within the email, and research on the Zavvi website, I soon established it was a genuine email from Zavvi.
So unless Atlético Madrid's stadium has undergone a huge capacity upgrade, it became obvious that someone at Zavvi had made a huge blunder, resulting in personalised competition-winner emails being sent en masse to thousands of Zavvi customers.
What compounded matters was Zavvi keeping relatively schtum about the blunder throughout the day. The e-commerce entertainment retail store published an apology mid-morning on their Facebook page, but after hundreds of comments by angry customers, they deleted the post a couple of hours later. It took almost 8 hours before Zavvi finally followed up the "Congratulations" email with an emailed apology, which offered a mere 15% discount off their website products. I suspect most Zavvi customers won't be too happy about that, especially those who went through the day believing they had won a once-in-a-lifetime competition.
Posted by Rene Mayrhofer and Xiaowen Xin, Android Security & Privacy Team [Cross-posted from the Android Developers Blog]
With every new version of Android, one of our top priorities is raising the bar for security. Over the last few years, these improvements have led to measurable progress across the ecosystem, and 2018 was no different.
In the 4th quarter of 2018, we had 84% more devices receiving a security update than in the same quarter the prior year. At the same time, no critical security vulnerabilities affecting the Android platform were publicly disclosed without a security update or mitigation available in 2018, and we saw a 20% year-over-year decline in the proportion of devices that installed a Potentially Harmful App. In the spirit of transparency, we released this data and more in our Android Security & Privacy 2018 Year In Review.
But now you may be asking, what’s next?
Today at Google I/O we lifted the curtain on all the new security features being integrated into Android Q. We plan to go deeper on each feature in the coming weeks and months, but first wanted to share a quick summary of all the security goodness we’re adding to the platform.
Storage encryption is one of the most fundamental (and effective) security technologies, but current encryption standards require that devices have cryptographic acceleration hardware. Because of this requirement, many devices are not capable of using storage encryption. The launch of Adiantum changes that in the Android Q release. We announced Adiantum in February; it is designed to run efficiently without specialized hardware, and can work across everything from smart watches to internet-connected medical devices.
Our commitment to the importance of encryption continues with the Android Q release. All compatible Android devices newly launching with Android Q are required to encrypt user data, with no exceptions. This includes phones, tablets, televisions, and automotive devices. This will ensure the next generation of devices are more secure than their predecessors, and allow the next billion people coming online for the first time to do so safely.
However, storage encryption is just one half of the picture, which is why we are also enabling TLS 1.3 support by default in Android Q. TLS 1.3 is a major revision to the TLS standard finalized by the IETF in August 2018. It is faster, more secure, and more private. TLS 1.3 can often complete the handshake in fewer roundtrips, making the connection time up to 40% faster for those sessions. From a security perspective, TLS 1.3 removes support for weaker cryptographic algorithms, as well as some insecure or obsolete features. It uses a newly-designed handshake which fixes several weaknesses in TLS 1.2. The new protocol is cleaner, less error prone, and more resilient to key compromise. Finally, from a privacy perspective, TLS 1.3 encrypts more of the handshake to better protect the identities of the participating parties.
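As a quick, hedged illustration of what "TLS 1.3 by default" means in practice (using Python's standard `ssl` module here, not anything Android-specific), a client can pin its minimum protocol version so that older, weaker versions are never negotiated:

```python
import ssl

# Build a client context that refuses to negotiate anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Any server that only speaks TLS 1.2 or older will now fail the handshake.
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
```

The same idea, applied platform-wide, is what Android Q does for apps that use the platform TLS stack.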
Android utilizes a strategy of defense-in-depth to ensure that individual implementation bugs are insufficient for bypassing our security systems. We apply process isolation, attack surface reduction, architectural decomposition, and exploit mitigations to render vulnerabilities more difficult or impossible to exploit, and to increase the number of vulnerabilities needed by an attacker to achieve their goals.
In Android Q, we have applied these strategies to security critical areas such as media, Bluetooth, and the kernel. We describe these improvements more extensively in a separate blog post, but some highlights include:
- A constrained sandbox for software codecs.
- Increased production use of sanitizers to mitigate entire classes of vulnerabilities in components that process untrusted content.
- Shadow Call Stack, which provides backward-edge Control Flow Integrity (CFI) and complements the forward-edge protection provided by LLVM’s CFI.
- Protecting Address Space Layout Randomization (ASLR) against leaks using eXecute-Only Memory (XOM).
- Introduction of the Scudo hardened allocator, which makes a number of heap-related vulnerabilities more difficult to exploit.
Android Pie introduced the BiometricPrompt API to help apps utilize biometrics, including face, fingerprint, and iris. Since the launch, we’ve seen a lot of apps embrace the new API, and now with Android Q, we’ve updated the underlying framework with robust support for face and fingerprint. Additionally, we expanded the API to support additional use-cases, including both implicit and explicit authentication.
In the explicit flow, the user must perform an action to proceed, such as tap their finger to the fingerprint sensor. If they’re using face or iris to authenticate, then the user must click an additional button to proceed. The explicit flow is the default flow and should be used for all high-value transactions such as payments.
Implicit flow does not require an additional user action. It is used to provide a lighter-weight, more seamless experience for transactions that are readily and easily reversible, such as sign-in and autofill.
Another handy new feature in BiometricPrompt is the ability to check if a device supports biometric authentication prior to invoking BiometricPrompt. This is useful when the app wants to show an “enable biometric sign-in” or similar item in their sign-in page or in-app settings menu. To support this, we’ve added a new BiometricManager class. You can now call the canAuthenticate() method in it to determine whether the device supports biometric authentication and whether the user is enrolled.
Beyond Android Q, we are looking to add Electronic ID support for mobile apps, so that your phone can be used as an ID, such as a driver’s license. Apps such as these have a lot of security requirements and involve integration between the client application on the holder’s mobile phone, a reader/verifier device, and issuing-authority backend systems used for license issuance, updates, and revocation.
This initiative requires expertise around cryptography and standardization from the ISO and is being led by the Android Security and Privacy team. We will be providing APIs and a reference implementation of HALs for Android devices in order to ensure the platform provides the building blocks for similar security and privacy sensitive applications. You can expect to hear more updates from us on Electronic ID support in the near future.

Acknowledgements: This post leveraged contributions from Jeff Vander Stoep and Shawn Willden
Posted by Jeff Vander Stoep, Android Security & Privacy Team, and Chong Zhang, Android Media Team [Cross-posted from the Android Developers Blog]
Android Q Beta versions are now publicly available. Among the various new features introduced in Android Q are some important security hardening changes. While exciting new security features are added in each Android release, hardening generally refers to security improvements made to existing components.
When prioritizing platform hardening, we analyze data from a number of sources including our vulnerability rewards program (VRP). Past security issues provide useful insight into which components can use additional hardening. Android publishes monthly security bulletins which include fixes for all the high/critical severity vulnerabilities in the Android Open Source Project (AOSP) reported through our VRP. While fixing vulnerabilities is necessary, we also get a lot of value from the metadata - analysis on the location and class of vulnerabilities. With this insight we can apply the following strategies to our existing components:
- Contain: isolating and de-privileging components, particularly ones that handle untrusted content. This includes:
- Access control: adding permission checks, increasing the granularity of permission checks, or switching to safer defaults (for example, default deny).
- Attack surface reduction: reducing the number of entry/exit points (i.e. principle of least privilege).
- Architectural decomposition: breaking privileged processes into less privileged components and applying attack surface reduction.
- Mitigate: Assume vulnerabilities exist and actively defend against classes of vulnerabilities or common exploitation techniques.
Here’s a look at high severity vulnerabilities by component and cause from 2018:
Most of Android’s vulnerabilities occur in the media and Bluetooth components. Use-after-free (UAF), integer overflows, and out-of-bounds (OOB) reads/writes comprise 90% of vulnerabilities, with OOB being the most common.
A Constrained Sandbox for Software Codecs
In Android Q, we moved software codecs out of the main mediacodec service into a constrained sandbox. This is a big step forward in our effort to improve security by isolating various media components into less privileged sandboxes. As Mark Brand of Project Zero points out in his Return To Libstagefright blog post, constrained sandboxes are not where an attacker wants to end up. In 2018, approximately 80% of the critical/high severity vulnerabilities in media components occurred in software codecs, meaning further isolating them is a big improvement. Due to the increased protection provided by the new mediaswcodec sandbox, these same vulnerabilities will receive a lower severity based on Android’s severity guidelines.
The following figure shows an overview of the evolution of media services layout in the recent Android releases.
- Prior to N, media services are all inside one monolithic mediaserver process, and the extractors run inside the client.
- In N, we delivered a major security re-architecture, where a number of lower-level media services were spun off into individual service processes with reduced-privilege sandboxes. Extractors were moved to the server side and put into a constrained sandbox. Only a couple of higher-level functionalities remained in mediaserver itself.
- In O, the services are “treblized” and further deprivileged, that is, separated into individual sandboxes and converted into HALs. The media.codec service became a HAL while still hosting both software and hardware codec implementations.
- In Q, the software codecs are extracted from the media.codec process and moved back to the system side. They become a system service that exposes the codec HAL interface. SELinux policy and seccomp filters are further tightened for this process. In particular, while the previous mediacodec process had access to device drivers for hardware-accelerated codecs, the software codec process has no access to device drivers.
With this move, we now have the two primary sources for media vulnerabilities tightly sandboxed within constrained processes. Software codecs are similar to extractors in that they both have extensive code parsing bitstreams from untrusted sources. Once a vulnerability is identified in the source code, it can be triggered by sending a crafted media file to media APIs (such as MediaExtractor or MediaCodec). Sandboxing these two services allows us to reduce the severity of potential security vulnerabilities without compromising performance.
In addition to constraining riskier codecs, a lot of work has also gone into preventing common types of vulnerabilities.
Incorrect or missing memory bounds checking on arrays account for about 34% of Android’s userspace vulnerabilities. In cases where the array size is known at compile time, LLVM’s bound sanitizer (BoundSan) can automatically instrument arrays to prevent overflows and fail safely.
BoundSan is enabled in 11 media codecs and throughout the Bluetooth stack for Android Q. By optimizing away a number of unnecessary checks the performance overhead was reduced to less than 1%. BoundSan has already found/prevented potential vulnerabilities in codecs and Bluetooth.
More integer sanitizer in more places
Android pioneered the production use of sanitizers in Android Nougat when we first started rolling out integer sanitization (IntSan) in the media frameworks. This work has continued with each release and has been very successful in preventing otherwise exploitable vulnerabilities. For example, new IntSan coverage in Android Pie mitigated 11 critical vulnerabilities. Enabling IntSan is challenging because overflows are often benign and unsigned integer overflows are well defined and sometimes intentional. This is quite different from the bound sanitizer, where OOB reads/writes are always unintended and often exploitable. Enabling IntSan has been a multi-year project, but with Q we have fully enabled it across the media frameworks with the inclusion of 11 more codecs.
IntSan works by instrumenting arithmetic operations to abort when an overflow occurs. This instrumentation can have an impact on performance, so evaluating the impact on CPU usage is necessary. In cases where performance impact was too high, we identified hot functions and individually disabled IntSan on those functions after manually reviewing them for integer safety.
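The effect of that instrumentation can be sketched in a few lines of Python. This is a hypothetical helper for illustration only, not the actual LLVM pass: every arithmetic operation gains a range check, and an out-of-range result aborts (here, raises) instead of silently wrapping.

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def sanitized_add(a: int, b: int) -> int:
    # Mimics what IntSan's instrumentation does for a 32-bit signed addition:
    # compute the full result, then abort if it falls outside the int32 range.
    result = a + b
    if not (INT32_MIN <= result <= INT32_MAX):
        raise OverflowError(f"signed integer overflow: {a} + {b}")
    return result

sanitized_add(2**30, 1)        # fine
# sanitized_add(INT32_MAX, 1)  # would raise OverflowError
```

The performance cost mentioned above comes from running exactly this kind of check on every instrumented operation.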
BoundSan and IntSan are considered strong mitigations because (where applied) they prevent the root cause of memory safety vulnerabilities. The class of mitigations described next target common exploitation techniques. These mitigations are considered to be probabilistic because they make exploitation more difficult by limiting how a vulnerability may be used.
Shadow Call Stack
LLVM’s Control Flow Integrity (CFI) was enabled in the media frameworks, Bluetooth, and NFC in Android Pie. CFI makes code reuse attacks more difficult by protecting the forward edges of the call graph, such as function pointers and virtual functions. Android Q uses LLVM’s Shadow Call Stack (SCS) to protect return addresses, protecting the backward edge of the control flow graph. SCS accomplishes this by storing return addresses in a separate shadow stack, which is protected from leakage by storing its location in the x18 register, which is now reserved by the compiler.
SCS has negligible performance overhead and a small memory increase due to the separate stack. In Android Q, SCS has been turned on in portions of the Bluetooth stack and is also available for the kernel. We’ll share more on that in an upcoming post.
Like SCS, eXecute-Only Memory (XOM) aims at making common exploitation techniques more expensive. It does so by strengthening the protections already provided by address space layout randomization (ASLR), which in turn makes code reuse attacks more difficult by requiring attackers to first leak the location of the code they intend to reuse. This often means that an attacker now needs two vulnerabilities, a read primitive and a write primitive, where previously just a write primitive was necessary in order to achieve their goals. XOM protects against leaks (memory disclosures of code segments) by making code unreadable. Attempts to read execute-only code result in the process aborting safely.
Tombstone from a XOM abort
Starting in Android Q, platform-provided AArch64 code segments in binaries and libraries are loaded as execute-only. Not all devices will immediately receive the benefit as this enforcement has hardware dependencies (ARMv8.2+) and kernel dependencies (Linux 4.9+, CONFIG_ARM64_UAO). For apps with a targetSdkVersion lower than Q, Android’s zygote process will relax the protection in order to avoid potential app breakage, but 64 bit system processes (for example, mediaextractor, init, vold, etc.) are protected. XOM protections are applied at compile-time and have no memory or CPU overhead.
Scudo Hardened Allocator
Scudo is a dynamic heap allocator designed to be resilient against heap related vulnerabilities such as:
- Use-after-frees: by quarantining freed blocks.
- Double-frees: by tracking chunk states.
- Buffer overflows: by checksumming headers.
- Heap sprays and layout manipulation: by improved randomization.
Scudo does not prevent exploitation but rather proactively manages memory in a way to make exploitation more difficult. It is configurable on a per-process basis depending on performance requirements. Scudo is enabled in extractors and codecs in the media frameworks.
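Two of those defenses, chunk-state tracking (against double-frees) and quarantining (against use-after-free), can be sketched with a toy allocator in Python. This is purely illustrative under my own invented names; Scudo's real implementation lives in LLVM's compiler-rt and works on raw memory, not a dictionary:

```python
class ToyQuarantiningHeap:
    # Chunk states, tracked per allocation much as Scudo records state
    # in its chunk headers.
    ALLOCATED, QUARANTINED = "allocated", "quarantined"

    def __init__(self):
        self._state = {}

    def alloc(self, chunk_id):
        self._state[chunk_id] = self.ALLOCATED

    def free(self, chunk_id):
        if self._state.get(chunk_id) != self.ALLOCATED:
            # A second free of the same chunk is caught by the state check.
            raise RuntimeError(f"invalid/double free of chunk {chunk_id}")
        # Quarantine instead of immediately reusing the chunk, so a dangling
        # pointer cannot instantly alias a fresh allocation (use-after-free).
        self._state[chunk_id] = self.QUARANTINED
```

The point is the shape of the defense: corruption and misuse are detected and the process aborts, rather than the attacker getting a predictable heap layout to work with.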
Tombstone from Scudo aborts
Contributing security improvements to Open Source
AOSP makes use of a number of Open Source Projects to build and secure Android. Google is actively contributing back to these projects in a number of security critical areas:
- Creator of and primary contributor to AddressSanitizer and other "sanitizers" (compiler-based dynamic testing tools) in LLVM.
- Primary contributor to compiler-based hardening tools in LLVM (ControlFlowIntegrity, ShadowCallStack).
- Creator of fuzzing tools (AFL, libFuzzer, honggfuzz, syzkaller) and fuzzing infrastructure (oss-fuzz, syzbot) for user-space and OS kernels.
- Participant in research related to hardware-assisted memory safety.
- Primary contributor of the Scudo hardened allocator to LLVM.
Thank you to Ivan Lozano, Kevin Deus, Kostya Kortchinsky, Kostya Serebryany, and Mike Antares for their contributions to this post.
This is a normal frame with Ethernet II encapsulation. It begins with 6 bytes of the destination MAC address, 6 bytes of the source MAC address, and 2 bytes of an Ethertype, which in this case is 0x0800, indicating an IP packet follows the Ethernet header. There is no TCP payload as this is an ACK segment.
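As a sketch, that 14-byte Ethernet II header can be unpacked in a few lines of Python; the bytes here are transcribed from the hex dump of this frame shown later in the post:

```python
import struct

# First 14 bytes of frame 4238: dst MAC (6), src MAC (6), Ethertype (2).
header = bytes.fromhex("fcecda49e01038baf8127dbb0800")
dst, src, ethertype = struct.unpack("!6s6sH", header)

def mac(b: bytes) -> str:
    return ":".join(f"{octet:02x}" for octet in b)

print(mac(dst))             # fc:ec:da:49:e0:10  (the Ubiquiti device)
print(mac(src))             # 38:ba:f8:12:7d:bb  (the Intel NIC)
print(f"{ethertype:#06x}")  # 0x0800 -> IPv4
```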
You can also see this in Tshark.
$ tshark -Vx -r frame4238.pcap
Frame 1: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
Encapsulation type: Ethernet (1)
Arrival Time: May 7, 2019 18:19:10.071831000 UTC
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1557253150.071831000 seconds
[Time delta from previous captured frame: 0.000000000 seconds]
[Time delta from previous displayed frame: 0.000000000 seconds]
[Time since reference or first frame: 0.000000000 seconds]
Frame Number: 1
Frame Length: 66 bytes (528 bits)
Capture Length: 66 bytes (528 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: eth:ethertype:ip:tcp]
Ethernet II, Src: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb), Dst: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
Destination: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
Address: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Source: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
Address: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 192.168.4.96, Dst: 184.108.40.206
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
0000 00.. = Differentiated Services Codepoint: Default (0)
.... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
Total Length: 52
Identification: 0xd98c (55692)
Flags: 0x4000, Don't fragment
0... .... .... .... = Reserved bit: Not set
.1.. .... .... .... = Don't fragment: Set
..0. .... .... .... = More fragments: Not set
...0 0000 0000 0000 = Fragment offset: 0
Time to live: 64
Protocol: TCP (6)
Header checksum: 0x553f [validation disabled]
[Header checksum status: Unverified]
Transmission Control Protocol, Src Port: 38828, Dst Port: 443, Seq: 1, Ack: 1, Len: 0
Source Port: 38828
Destination Port: 443
[Stream index: 0]
[TCP Segment Len: 0]
Sequence number: 1 (relative sequence number)
[Next sequence number: 1 (relative sequence number)]
Acknowledgment number: 1 (relative ack number)
1000 .... = Header Length: 32 bytes (8)
Flags: 0x010 (ACK)
000. .... .... = Reserved: Not set
...0 .... .... = Nonce: Not set
.... 0... .... = Congestion Window Reduced (CWR): Not set
.... .0.. .... = ECN-Echo: Not set
.... ..0. .... = Urgent: Not set
.... ...1 .... = Acknowledgment: Set
.... .... 0... = Push: Not set
.... .... .0.. = Reset: Not set
.... .... ..0. = Syn: Not set
.... .... ...0 = Fin: Not set
[TCP Flags: ·······A····]
Window size value: 296
[Calculated window size: 296]
[Window size scaling factor: -1 (unknown)]
Checksum: 0x08b0 [unverified]
[Checksum Status: Unverified]
Urgent pointer: 0
Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - No-Operation (NOP)
Kind: No-Operation (1)
TCP Option - Timestamps: TSval 26210782, TSecr 2652693036
Kind: Time Stamp Option (8)
Timestamp value: 26210782
Timestamp echo reply: 2652693036
[Time since first frame in this TCP stream: 0.000000000 seconds]
[Time since previous frame in this TCP stream: 0.000000000 seconds]
0000 fc ec da 49 e0 10 38 ba f8 12 7d bb 08 00 45 00 ...I..8...}...E.
0010 00 34 d9 8c 40 00 40 06 55 3f c0 a8 04 60 34 15 .4..@.@.U?...`4.
0020 12 db 97 ac 01 bb e3 42 2a 57 83 49 c2 ea 80 10 .......B*W.I....
0030 01 28 08 b0 00 00 01 01 08 0a 01 8f f1 de 9e 1c .(..............
0040 e2 2c
You can see Wireshark understands what it is seeing. It decodes the IP header and the TCP header.
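You can even confirm the 0x553f header checksum yourself from the hex dump. The standard IPv4 algorithm is the one's-complement of the one's-complement sum of the header's 16-bit words, computed with the checksum field zeroed. A small Python sketch:

```python
def ipv4_checksum(header: bytes) -> int:
    # One's-complement sum of big-endian 16-bit words, folded and inverted.
    total = 0
    for i in range(0, len(header), 2):
        total += int.from_bytes(header[i:i + 2], "big")
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# The 20-byte IP header of frame 4238 (bytes 14-33 of the dump),
# with the checksum field (0x553f) zeroed out for the calculation.
header = bytes.fromhex("45000034d98c400040060000c0a80460341512db")
print(f"{ipv4_checksum(header):#06x}")  # 0x553f
```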
So far so good. Here is an example of the weird traffic I was seeing.
Here is what Tshark thinks of it.
$ tshark -Vx -r frame4241.pcap
Frame 1: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
Encapsulation type: Ethernet (1)
Arrival Time: May 7, 2019 18:19:10.073296000 UTC
[Time shift for this packet: 0.000000000 seconds]
Epoch Time: 1557253150.073296000 seconds
[Time delta from previous captured frame: 0.000000000 seconds]
[Time delta from previous displayed frame: 0.000000000 seconds]
[Time since reference or first frame: 0.000000000 seconds]
Frame Number: 1
Frame Length: 66 bytes (528 bits)
Capture Length: 66 bytes (528 bits)
[Frame is marked: False]
[Frame is ignored: False]
[Protocols in frame: eth:llc:data]
IEEE 802.3 Ethernet
Destination: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
Address: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
Source: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
Address: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
[Expert Info (Error/Malformed): Length field value goes past the end of the payload]
[Length field value goes past the end of the payload]
[Severity level: Error]
DSAP: Unknown (0x45)
0100 010. = SAP: Unknown
.... ...1 = IG Bit: Group
SSAP: LLC Sub-Layer Management (0x02)
0000 001. = SAP: LLC Sub-Layer Management
.... ...0 = CR Bit: Command
Control field: U, func=Unknown (0x0B)
000. 10.. = Command: Unknown (0x02)
.... ..11 = Frame type: Unnumbered frame (0x3)
Data (49 bytes)
0000 fc ec da 49 e0 10 38 ba f8 12 7d bb 00 38 45 02 ...I..8...}..8E.
0010 0b 84 d9 8d 86 b5 40 06 49 ee c0 a8 04 60 34 15 ......@.I....`4.
0020 12 db 97 ac 0d 0b e3 42 2a 57 83 49 c2 ea c8 ec .......B*W.I....
0030 01 28 17 6f 00 00 01 01 08 0a 01 8f f1 de ed 7f .(.o............
0040 a5 4a .J
What's the problem? This frame begins with 6 bytes of the destination MAC address and 6 bytes of the source MAC address, as we saw before. However, the next two bytes are 0x0038, which is not the same as the Ethertype of 0x0800 we saw earlier. 0x0038 is decimal 56, which would seem to indicate a frame length (even though the frame here is a total of 66 bytes).
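The rule Wireshark applies here comes from the IEEE: a value of 1500 (0x05DC) or less in that field is an 802.3 length, while 1536 (0x0600) or greater is an Ethernet II Ethertype. A quick sketch of the decision in Python:

```python
def classify_type_field(value: int) -> str:
    # Per IEEE 802.3: <= 1500 is a payload length; >= 0x0600 is an Ethertype;
    # values in between are undefined.
    if value <= 1500:
        return "IEEE 802.3 length"
    if value >= 0x0600:
        return "Ethernet II Ethertype"
    return "undefined"

print(classify_type_field(0x0800))  # Ethernet II Ethertype (IPv4)
print(classify_type_field(0x0038))  # IEEE 802.3 length (56)
```

This is why a single corrupted pair of bytes is enough to flip Wireshark's entire interpretation of the frame.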
Wireshark decides to treat this frame as not being Ethernet II, but instead as IEEE 802.3 Ethernet. I had to refer to appendix A of my first book to see what this meant.
For comparison, here is the frame format for Ethernet II (page 664):
Here is the frame format for IEEE 802.3 Ethernet.
This is much more complicated: Dst MAC, Src MAC, length, and then DSAP, SSAP, Control, and data.
It turns out that this format doesn't seem to fit what is happening in frame 4241, either. While the length field appears to be in the ballpark, Wireshark's assumption that the next bytes are DSAP, SSAP, Control, and data doesn't fit. The clue for me was seeing that 0x45 followed the presumed length field. I recognized 0x45 as the beginning of an IP header, with 4 meaning IPv4 and 5 meaning 5 words (20 bytes) in the IP header.
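The nibble arithmetic behind that recognition is simple enough to show (a throwaway snippet, just to make the bit positions explicit):

```python
first_byte = 0x45

version = first_byte >> 4      # high nibble: 4 -> IPv4
ihl_words = first_byte & 0x0F  # low nibble: 5 32-bit words
header_len = ihl_words * 4     # 5 * 4 = 20 bytes, the minimal IPv4 header

print(version, ihl_words, header_len)  # 4 5 20
```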
If we take a manual byte-by-byte comparative approach we can better understand what may be happening with these two frames. (I broke the 0x45 byte into two "nibbles" in one case.)
Note that I have bolded the parts of each frame that are exactly the same.
This analysis shows that these two frames are very similar, especially in places where I would not expect them to be similar. This caused me to hypothesize that frame 4241 was a corrupted version of frame 4238.
I can believe that the frames would share MAC addresses, IP addresses, and certain IP and TCP defaults. However, it is unusual for them to have the same high source ports (38828) but not the same destination ports (443 and 3339). Very telling is the fact that they have the same TCP sequence and acknowledgement numbers. They also share the same source timestamp.
Notice one field that I did not bold, because they are not identical -- the IP ID value. Frame 4238 has 0xd98c and frame 4241 has 0xd98d. The perfectly incremented IP ID prompted me to believe that frame 4241 is a corrupted retransmission, at the IP layer, of the same TCP segment.
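That byte-by-byte comparison can be automated. Using the two hex dumps transcribed from above, a short script lists exactly which offsets differ and pulls out the IP ID off-by-one:

```python
# Frames 4238 and 4241, transcribed from the two hex dumps above.
f4238 = bytes.fromhex(
    "fcecda49e01038baf8127dbb08004500"
    "0034d98c40004006553fc0a804603415"
    "12db97ac01bbe3422a578349c2ea8010"
    "012808b000000101080a018ff1de9e1c"
    "e22c"
)
f4241 = bytes.fromhex(
    "fcecda49e01038baf8127dbb00384502"
    "0b84d98d86b5400649eec0a804603415"
    "12db97ac0d0be3422a578349c2eac8ec"
    "0128176f00000101080a018ff1deed7f"
    "a54a"
)

# List every byte offset where the two frames disagree.
diffs = [i for i in range(len(f4238)) if f4238[i] != f4241[i]]
print(diffs)

# The IP ID occupies frame offsets 18-19; only the low byte differs,
# i.e. the two IDs are perfectly consecutive.
print(f4238[18:20].hex(), f4241[18:20].hex())  # d98c d98d
```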
However, I really don't know what to think. These frames were captured in an Ubuntu 16.04 VirtualBox VM by netsniff-ng. Is this a problem with netsniff-ng, or Linux, or VirtualBox, or the Linux host operating system running VirtualBox?
I'd like to thank the folks at ask.wireshark.org for their assistance with my attempts to decode this (and other) frames as 802.3 raw Ethernet. What's that? It's basically a format that Novell used with IPX, where the frame is Dst MAC, Src MAC, length, data.
I wanted to see if I could tell Wireshark to decode the odd frames as 802.3 raw Ethernet, rather than IEEE 802.3 Ethernet with LLC headers.
Sake Blok helpfully suggested I change the pcap's link layer type to User0, and then tell Wireshark how to interpret the frames. I did it this way, per his direction:
$ editcap -T user0 excerpt.pcap excerpt-user0.pcap
Next I opened the trace in Wireshark and saw frame 4241 (here listed as frame 3) as shown below:
DLT 147 corresponds to the link layer type for User0. Wireshark doesn't know how to handle it. We fix that by right-clicking on the yellow field and selecting Protocol Preferences -> Open DLT User preferences:
You can see Wireshark is now making sense of the IP header, but it doesn't know how to handle the TCP header which follows. I tried different values and options to see if I could get Wireshark to understand the TCP header too, but this went far enough for my purposes.
The bottom line is that I believe there is some sort of packet capture problem, either with the software used or with the traffic presented to the software by the bridged NIC created by VirtualBox. As this is a lab environment and the traffic is 1% of the overall capture, I am not worried about the results.
I am fairly sure that the weird traffic is not on the wire. I tried capturing on the host OS sniffing NIC and did not see anything resembling this traffic.
Have you seen anything like this? Let me know in a comment here or on Twitter.
PS: I found the frame.number=X Wireshark display filter helpful, along with the frame.len>Y display filter, when researching this activity.
The DBIR has evolved since its initial release in 2008, when it focused on payment card data breaches and Verizon's own breach investigations. This year’s DBIR involved the analysis of 41,686 security incidents from 66 global data sources in addition to Verizon. The findings are expertly presented over 77 pages, using simple charts supported by astute ‘plain English’ explanations, which is one reason the DBIR is among the most quoted reports in presentations and industry sales collateral.
DBIR 2019 Key Takeaways
- Financial gain remains the most common motive behind data breaches (71%)
- 43% of breaches occurred at small businesses
- A third (32%) of breaches involved phishing
- The nation-state threat is increasing, with 23% of breaches by nation-state actors
- More than half (56%) of data breaches took months or longer to discover
- Ransomware remains a major threat, and is the second most common type of malware reported
- Business executives are increasingly targeted with social engineering attacks such as phishing/BEC
- Crypto-mining malware accounts for less than 5% of data breaches; despite the publicity, it didn’t make the top ten malware listed in the report
- Espionage is a key motivation behind a quarter of data breaches
- 60 million records breached due to misconfigured cloud service buckets
- Continued reduction in payment card point of sale breaches
- The hacktivist threat remains low; the increase in hacktivist attacks reported in the 2012 DBIR appears to have been a one-off spike
According to the 2019 Verizon Data Breach Investigations Report, there was a noticeable shift toward financially motivated crime (80 percent), with 35 percent of all breaches occurring as a result of human error, and approximately one quarter of breaches occurring through web application attacks. These attacks were mostly attributable to the use of stolen credentials used to access cloud-based email.
Another fun fact: social engineering attacks are increasingly successful, and the primary target is the C-suite. These executives are 12x more likely to be targeted than other members of an organization, and 9x more likely to be the target of these social breaches than in previous years. Verizon notes that a successful pretexting attack on a senior executive can hit the jackpot, as 12 percent of all breaches analyzed occurred for financially motivated reasons, and an executive's approval authority and privileged access to critical systems often go unchallenged.
“Typically time-starved and under pressure to deliver, senior executives quickly review and click on emails prior to moving on to the next (or have assistants managing email on their behalf), making suspicious emails more likely to get through,” the Verizon DBIR states. “The increasing success of social attacks such as business email compromises (BECs, which represent 370 incidents or 248 confirmed breaches of those analyzed), can be linked to the unhealthy combination of a stressful business environment combined with a lack of focused education on the risks of cybercrime.”
Retailers Are Most Vulnerable at the Application Layer
The good news for consumers and retailers alike is that the days of POS compromises and skimmers at the gas pump appear to be numbered, as these card breaches continue to decline in this report. The not-so-good news is that these attacks are instead primarily occurring against e-commerce payment applications and web applications. Indeed, the report shows that web applications, privilege misuse, and miscellaneous errors make up 81 percent of breaches for retail organizations.
What’s more, 62 percent of breaches and 39 percent of incidents occur at the web application layer. While it is unclear exactly how the web applications were compromised in some cases, it’s assumed that attackers are scanning for specific web app vulnerabilities, exploiting them to gain access, inserting some kind of malware, and harvesting payment card data for profit.
The report notes, “We have seen webshell backdoors involved in between the initial hack and introduction of malware in prior breaches. While that action was not recorded in significant numbers in this data set, it is an additional breadcrumb to look for in detection efforts. In brief, vulnerable internet-facing e-commerce applications provide an avenue for efficient, automated, and scalable attacks. And there are criminal groups that specialize in these types of attacks that feast on low-hanging fruit.”
Overall, Veracode’s State of Software Security Vol. 9 shows that retail organizations are quick to fix their flaws, ranking second in this regard as compared to other industries. With this in mind, it may mean that retail organizations need to keep a closer eye on third-party software and open source code in their own applications to ensure they’re not the next to sign a cyberattacker’s paycheck.
At Veracode, we help our customers to ensure that every web application in their portfolio is secure through each stage of the SDLC. Check out this case study to learn about how Blue Prism implemented Veracode Verified to ensure the strength of its application security program and protect its most sensitive data.
At Google I/O this week you are going to hear about a lot of new features in Android that are coming in Q. One thing you will also hear about is how every new Android release comes with dozens of security and privacy enhancements. We have been continually investing in our layered security approach, also referred to as “defense-in-depth”. These defenses start with hardware-based security, moving up the stack to the Linux kernel with app sandboxing. On top of that, we provide built-in security services designed to protect against malware and phishing.
However, layered security doesn’t just apply to the technology. It also applies to the people and the process. Both Android and Chrome OS have dedicated security teams who are tasked with continually enhancing the security of these operating systems through new features and anti-exploitation techniques. In addition, each team leverages a mature and comprehensive security development lifecycle process to ensure that security is always part of the process and not an afterthought.
Secure by design is not the only thing that Android and Chrome OS have in common. Both operating systems also share numerous key security concepts, including:
- Heavily relying on hardware based security for things like rollback prevention and verified boot
- Continued investment in anti-exploitation techniques so that a bug or vulnerability does not become exploitable
- Implementing two copies of the OS in order to support seamless updates that run in the background and notify the user when the device is ready to boot the new version
- Splitting up feature and security updates and providing a frequent cadence of security updates
- Providing built-in anti-malware and anti-phishing solutions through Google Play Protect and Google Safe Browsing
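The "two copies of the OS" idea above can be sketched in a few lines. This is a toy model of the A/B slot scheme, not Android's actual update engine; the class and method names are invented for illustration:

```python
class ABSlots:
    """Toy A/B partition model: write the inactive slot in the background,
    then flip the boot slot only once the new image is verified, so a bad
    update never leaves the device unbootable."""
    def __init__(self):
        self.slots = {"a": "v1", "b": "v1"}
        self.active = "a"

    def stage_update(self, image: str) -> str:
        inactive = "b" if self.active == "a" else "a"
        self.slots[inactive] = image  # background write; user keeps running
        return inactive

    def reboot_into(self, slot: str, verified: bool):
        if verified:              # e.g. a verified-boot signature check
            self.active = slot    # flip only after verification succeeds
        # else: keep booting the known-good slot

dev = ABSlots()
staged = dev.stage_update("v2")
dev.reboot_into(staged, verified=True)
print(dev.active, dev.slots[dev.active])  # b v2
```

Note that the previous slot still holds the known-good "v1" image, which is what makes rollback after a failed verification cheap.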
- Android 9 (Pie) scored “strong” in 26 out of 30 categories
- Pixel 3 with Titan M received “strong” ratings in 27 of the 30 categories, and had the most “strong” ratings in the built-in security section out of all devices evaluated (15 out of 17)
- Chrome OS was added in this year's report and received strong ratings in 27 of the 30 categories.
You can see a breakdown of all of the categories in the table below:
Take a look at all of the great security and privacy enhancements that came in Pie by reading Android Pie à la mode: Security & Privacy. Also be sure to live stream our Android Q security update at Google I/O, titled "Security on Android: What's Next", on Thursday at 8:30am Pacific Time.
Denial of Service (DoS) attacks are still very much in vogue with cybercriminals. They are used for extortion attempts, to attack competitors or detractors, as an ideological statement, as a service for hire, or simply “for teh lulz.” As anti-DoS methods become more sophisticated, so do the DoS techniques, which become harder to stop or take down by going distributed (DDoS) across stolen or hacked endpoints. Some DDoS methods even use distributed public systems that aren’t hacked or stolen, but still offer a means for a reflected attack (DrDoS), such as the widespread Network Time Protocol (NTP) DrDoS attacks seen over the past several years.
In the spirit of discovering and exposing potential future cybercrime methods, this research focuses on determining the viability of DrDoS attacks using public-facing email validation protocols. With knowledge of attack anatomy, white hats can better understand the threat landscape while building their unique threat models, and if need be, build and configure defenses against such potential protocol abuses. Fortunately, or unfortunately, depending on your reference point, the findings of this research conclude that these types of attacks are not likely to be a widespread threat given the current sets of in-the-wild email server configurations, though this may change in the future as more systems come online and configuration habits shift.
We know what sort of returns we can get for DDoS leveraging SPF in large part through the work of Douglas Otis. However, given the other DDoS vectors available (DNS, NTP, etc.), using SPF alone doesn’t have much of a bite. The idea here was to also try to leverage other email validation protocols that may be configured on a mail server already employing SPF: a stacked attack. Following a review of the DomainKeys Identified Mail (DKIM) protocol RFC, it was discovered that there are instances where the specification suggests using reply codes: 4xx, 451/4.7.5, and 550/5.7.x specifically. This suggests mail server configurations that may reply to messages that meet, or fail, certain criteria.
However, of the 20 in-the-wild sample servers (located in the United States, France, Germany, Hungary, and Taiwan), zero responded to invalid DKIM headers. As with the DKIM RFC, the Domain-based Message Authentication, Reporting, and Conformance (DMARC) protocol RFC has a configuration suggestion for issuing a 5xy reply code for failed messages as well as a security discussion for External Reporting features of DMARC. Both of these vectors seemed promising for possible exploitation. Of the 20 in-the-wild servers tested, (located in the United States, the United Kingdom, France, Canada, and Switzerland) only four replied with a failure code and zero offered External Reporting services.
While subject to future change, these findings suggest that the current, real-world landscape does not lend itself to leveraging these validation protocols for any serious volume of DrDoS.
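For readers building their own threat models, the core DrDoS metric is simple to compute: the bandwidth amplification factor, i.e. how many reflected bytes the victim receives per spoofed byte the attacker sends. The numbers below are invented purely for illustration; they are not measurements from the servers tested above:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor (BAF): reflected bytes per spoofed byte.
    A BAF <= 1 means reflection gains the attacker nothing over direct flooding."""
    if request_bytes <= 0:
        raise ValueError("request size must be positive")
    return response_bytes / request_bytes

# Hypothetical sizes: a small spoofed SMTP message versus a multi-line
# 5xy rejection banner from a misconfigured reflector.
req, resp = 400, 1200
print(round(amplification_factor(req, resp), 2))  # 3.0
```

The research's conclusion follows directly from this arithmetic: with only 4 of 20 servers replying at all, and replies not much larger than the probes, the aggregate factor stays too low to be attractive.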
Just like in any good relationship, it takes time to get to know one another. Even when you’ve been together for a while, you still may learn new things that surprise you. It’s no different when you begin a relationship with a new product or solution. Over time, you will discover new features and tricks you didn’t even know existed. With this in mind, we’ve compiled a list of the top 15 things every customer should know about Core Impact. Take a look and see what you may have been missing.
#1: The Core Impact Customer Community
The Core Impact Customer Community is a place you can go to ask and answer questions about Impact and penetration testing, chat real-time with other Impact users, and take training courses to better leverage Impact for multiple types of testing. It also serves as a repository where you can post or download custom modules. This invaluable community resource exists to empower you to continue to get the most out of Impact.
#2: Flexible Licensing
Did you know that Impact has a flexible licensing model? We have many different license types that enable flexible use of the product, and ensure we can support multiple use cases, including:
- Machine-based unlimited licenses for those with a small, rotating team
- Named user unlimited licenses for those with dedicated, full time users
- Educational and lab licenses for those who want to use Impact in an educational capacity or tightly controlled lab environments
Our goal is to make sure you get the right combination of licenses that will work best for you and your team.
#3: Encrypting Agent Communications
All communication between Impact and its agents is both encrypted and authenticated, keeping it secure. Other solutions carry a higher risk of potential attackers ‘breaking in’ to the communications or hijacking their agents for nefarious purposes. Perform better, more detailed testing with the peace of mind that your communications will remain secure.
#4: Command and Control Options
Core Impact has a variety of command and control options that you can leverage. Whether connecting to or from a target or hiding the communications in DNS traffic, Impact has a variety of communication methods to better support different ways you might want to test. For example, using the DNS channel allows you to mask and disguise the communications inside DNS packets. All you have to do is select the type of communication you want the agents to use, and then deploy them. Every communication method features encryption and mutual authentication between Impact and its agents.
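To make the DNS-channel idea concrete, here is a rough sketch of how covert-channel data is commonly smuggled inside DNS names: encode the payload and split it into labels of at most 63 bytes under a tester-controlled domain. This is a generic illustration of the technique, not Core Impact's actual wire format, and the domain is a placeholder:

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_dns_labels(payload: bytes, domain: str) -> str:
    """Pack a payload into base32 labels under a tester-controlled domain."""
    text = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [text[i:i + MAX_LABEL] for i in range(0, len(text), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_dns_labels(name: str, domain: str) -> bytes:
    text = name[: -len(domain) - 1].replace(".", "").upper()
    text += "=" * (-len(text) % 8)  # restore base32 padding
    return base64.b32decode(text)

name = encode_dns_labels(b"agent-hello", "c2.example.test")
print(name)
assert decode_dns_labels(name, "c2.example.test") == b"agent-hello"
```

Base32 is the usual choice here because DNS names are case-insensitive, which rules out base64's mixed case.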
#5: Self-Terminating Agents
With Impact, you never have to worry about an agent hanging around longer than you want. Impact agents are configured to automatically clean themselves up at a time you set. Plus, Impact gives you the ability to set an expiration time when you deploy an agent, giving you control and minimizing artifacts left by your test. Even if a target is hibernated during a test and misses the cleanup signal, the agent will see that it’s past due and clean itself up. You can pen test with confidence and know that Impact won’t let you be the reason for an incident response.
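The "past due" check that makes self-termination robust to hibernation can be illustrated in a few lines. This is a generic sketch of the pattern (compare wall-clock time against a deadline on every wake-up), not Impact's actual agent code:

```python
import time

class Agent:
    """Sketch of a self-terminating agent: it cleans up once past its deadline,
    even if it was asleep (e.g. the host hibernated) when the deadline passed."""
    def __init__(self, lifetime_seconds: float, now=time.time):
        self._now = now
        self.expires_at = now() + lifetime_seconds
        self.alive = True

    def tick(self):
        # Wall-clock comparison, so a missed cleanup signal doesn't matter:
        # the first tick after wake-up notices the deadline has passed.
        if self._now() >= self.expires_at:
            self.cleanup()

    def cleanup(self):
        self.alive = False  # a real agent would also remove itself from disk/memory

# Simulate hibernation by injecting a fake clock.
clock = [0.0]
agent = Agent(lifetime_seconds=60, now=lambda: clock[0])
clock[0] = 55; agent.tick(); print(agent.alive)    # True: not yet expired
clock[0] = 7200; agent.tick(); print(agent.alive)  # False: woke up past due
```

The injected clock is just a testing convenience; the point is that expiry is a property of absolute time, not of receiving a signal.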
#6: Rapid Penetration Tests
Another great feature is that Impact can quickly find ‘low hanging fruit’ for you to act upon. Impact’s rapid penetration testing wizards can automatically find common weaknesses, while letting you choose how risky you want to be. This will free up time for you to do more in-depth testing and can even provide a short list of items to quickly prioritize for remediation.
#7: Intelligently Exploit Identities
Were you aware that Impact also enables you to easily leverage identities found during a test? With many identities in any given network, chances are you will come across them during testing. Impact enables you to securely store these identities. With Impact’s central identity store, it’s simple to use these identities to further your testing, allowing you to easily move and get access to more information.
#8: Stealthy PowerShell Attacks
Did you know that Impact can natively leverage PowerShell on remote hosts? Not just that, it can also do it stealthily, without using the PowerShell executable. PowerShell is a very powerful management framework for Windows machines and Impact’s ability to easily interface with it opens state-of-the-art attack methods preferred by advanced adversaries.
#9: Phishing Built for Pen Testers
Impact actually evolved from the suite of tools used by one of the first teams to offer third-party pen testing. In fact, Impact was created by a team of pen testing professionals to help make them more effective and efficient at their job. They recognized that there was great value in standardizing the process of how to conduct a pen test, and built this into their tools. As a result, Impact emphasizes an easy-to-use, repeatable, and consistent methodology.
Impact also has extensive phishing capabilities, built from the beginning with pen testing in mind, so you can do more than just report on who is susceptible to phishing. You can also gather additional information to help plan further testing and exploitation activities. Impact’s phishing functionality is often leveraged to ‘trick’ victims into giving you access to the network. If you are looking for pen testing with focused phishing capabilities, Impact is definitely the solution for you.
#10: A Python Framework
Here is something you may not know either: Impact is actually a Python framework. All modules, exploits, and tools are written in Python and are user customizable. You can write your own modules for things like integrations with third party tools, or modify existing ones to better suit your specific needs. This gives you a significant amount of flexibility to extend and enhance the value of investments you have already made.
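To give a feel for what "a Python framework" means in practice, here is a toy module skeleton. To be clear, Impact's real module API is not shown here; the `CustomModule` base class and its `run` hook are invented purely for illustration of the extension pattern:

```python
# Purely illustrative: the real Core Impact module API is proprietary, so the
# base class and hook names below (CustomModule, run) are invented for the sketch.
class CustomModule:
    """Base contract a framework might enforce: a name and a run() hook."""
    name = "base"

    def run(self, target: str) -> dict:
        raise NotImplementedError

class BannerGrabNote(CustomModule):
    """A toy module that just records which target it was pointed at."""
    name = "banner-grab-note"

    def run(self, target: str) -> dict:
        # A real module would do its work here (probe, exploit, integrate
        # with a third-party tool) and return structured results.
        return {"module": self.name, "target": target, "status": "ok"}

result = BannerGrabNote().run("10.0.0.5")
print(result["status"])  # ok
```

The pattern shown (subclass a base, override one hook, return structured results) is what makes a Python framework easy to extend with integrations and custom tooling.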
#11: Ongoing Logging and Reporting
Another key feature to be aware of is that Impact automatically logs everything you do over the course of your pen test. This includes all the modules you run, all the files you upload or download, and even all the commands you run on remote hosts. Impact automatically captures this input and output, providing an audit trail and ensuring that you do not have to keep your own detailed notes during the test.
Impact also has a powerful and flexible built-in reporting engine that allows you to create reports for any type of audience, whether they are Chief Executives, the Patching Team or even the Audit Team. These reports are also fully customizable and the templates can be saved for future use.
#12: Validating Vulnerability Scans
Impact automatically validates the results of a vulnerability scan. You can import the results from most vulnerability scanners, and Impact will automatically attempt to validate the scanner’s findings by attempting to exploit the vulnerabilities that were reported. You will then get a report of what Impact was and was not able to exploit. Confirming which vulnerabilities are actually exploitable can help speed up remediation by having Impact prioritize the list of vulnerabilities your scanners produce.
#13: Validating Remediation
With the remediation validation option, you can have Impact automatically re-run a previous pen test and provide a change report on any differences between the two runs. Impact will execute exactly as it did on the previous test, including information gathering, exploitation, and pivoting. You can use this to easily verify whether remediation efforts have been successful rather than having to do the entire test over again, saving tremendous time in re-testing.
#14: Multi-Vector Pivoting
Impact also enables you to pivot from one vector to another, dramatically improving your capabilities and efficiency through multi-vector pivoting. For example, when you exploit a weakness in a web application, you can then leverage it to pivot to the network. Or you can even leverage Impact to trick victims into giving you access to the network.
#15: Moving from One Host to Another
And last, but definitely not least, Impact makes it easy to pivot from one host to another. It is as simple as a right click. Impact has a wealth of additional features, like the Remote Interface, which you can leverage with the pivoting capabilities to make you more efficient and effective during your testing.
Getting the Most Out of Core Impact
This list will help you more intelligently manage your vulnerabilities and get the most out of Impact. After all, the more you get to know Core Impact, the more it can do to secure your business.
Cyber criminals are a part of modern life. From Uber account hacks to major business data breaches, our online identities are rarely safe. And while big-name companies under threat often make the news, it’s small and medium-sized enterprises who are actually their biggest targets.
Large businesses and government departments may seem like more obvious hacking targets with bigger payoffs, but these organisations can afford much more robust, well-kept and successful IT security measures, along with cyber security professionals working round the clock. Because of this, cyber criminals are much more likely to swing for easy targets like family businesses.
With the introduction of GDPR across Europe, all businesses are now much more responsible for the personal data they keep, meaning companies of all sizes can’t afford to go without at least the basic security measures. The UK National Cyber Security Centre (NCSC) has created a list of five principles as part of its Cyber Essentials Scheme. These include:
1. Secure your internet connection
2. Protect from viruses and other malware
3. Control access to your data and services
4. Secure your devices and software
5. Keep your devices and software up to date
All small businesses should know these principles and be putting them into practice, no matter how many staff they employ. In addition to this, here are a couple of other tips to keep hackers at bay which can be simply implemented in your business practices and keep the ICO (Information Commissioner’s Office) from the door.
Invest in Software and Hardware
While just functioning from day to day might be your only priority as a small business owner, investing in your technology will undoubtedly help in the long run. Keeping your software, such as antivirus software and operating systems, up to date will ensure that any vulnerabilities identified by the creators are covered and there are no gaping holes in your cyber defences.
It might also be a good idea to invest in a good-quality back-up server and cyber insurance, so that if any personal data is ever compromised, your operations can simply switch to the back-up server without affecting your business. Cyber insurance will also help keep you covered in case any clients’ personal data is lost and costs are incurred.
Staff Awareness
Without the awareness of your staff, no manner of cyber security measures will keep your business safe. 90% of breaches happen because of user interaction, most commonly through phishing scams. Sophisticated phishers can impersonate senior members of staff in your organisation and trick other employees into handing over login details, authorising bogus payments or redirecting bank transfers.
Ensuring that staff know how to identify phishing scams, and even having experienced trainers come in to guide them through cyber security best practice, may seem like a cost you can spare, but it will go far in keeping the walls around your business impenetrable.
The GDPR states that businesses who suffer a breach must alert the ICO and any customers who may have been affected within 72 hours of discovery. This is vital, and although fines could still be handed out for failure to prevent a breach, these fines will be much higher if the ICO discovers that you kept the information to yourself for longer than the 72 hour period.
The average time it takes for an organisation to discover a breach is 229 days, so the time a breach takes to come to your attention isn’t going to work in your favour. However, regular reporting is likely to result in earlier identification, which will not only save you time and money, but will also be a great trust signal to your clients that you take protecting their data seriously.
Security breaches are a ‘when’ not ‘if’ problem, so planning ahead is a necessity of modern business. 74% of SMEs don’t have any money saved to deal with an attack and 40% wouldn’t even know who to contact in the event of a breach. Having comprehensive disaster management plans in place will help keep you and your clients safe, keep your reputation in top shape and make sure you don’t have to pay out major money in the worst case scenario.
Plan of Action
The best thing for SMEs to do is to start small and keep building their defences as time goes on, helping keep costs down and customers happy. Here’s a plan of action to get started:
1. Start with the basics: follow the Cyber Essentials Scheme and bake these principles into your daily operations
2. Get an understanding of the risks to your business: check out the NCSC’s ’10 Steps to Cyber Security’ for further detail than the Cyber Essentials
3. Know your business: if you still feel your data isn’t safe, research more comprehensive frameworks like the IASME standard developed for small businesses
4. Once you have a complete security framework in place, develop on the NCSC’s advice with more sophisticated frameworks, such as the NIST framework for cybersecurity.
June has become the month where we’re inviting thousands of security aficionados to put their skills to the test...
In 2018, 23,563 people submitted at least one flag on their hunt for the secret cake recipe in the Beginner’s Quest. While 330 teams competed for a place in the CTF Finals, the lucky 10 winning teams got a trip to London to play with fancy tools, solve mysterious videos and dine in Churchill’s old chambers.
This June, we will be hosting our fourth-annual Capture the Flag event. Teams of security researchers will again come together from all over the globe for one weekend to eat, sleep and breathe security puzzles and challenges - some of them working together around the clock to solve some of the toughest security challenges on the planet.
Up for grabs this year is $31,337.00 in prize money and the title of Google CTF Champion.
Ready? Here are the details:
- The qualification round will take place online Sat/Sun June 22 and 23 2019
- The top 10 teams will qualify for the onsite final (location and details coming soon)
- Players from the Beginner's Quest can enter the draw for 10 tickets to witness the Google CTF finals
- Defence Secretary Gavin Williamson sacked over Huawei leak
- Daily Telegraph publishes details of a meeting about using the Chinese telecoms firm to help build the UK's 5G network
- Huawei row: Inquiry to be held into National Security Council leak
- Is Huawei a Threat to UK National Security?
- What's the greater risk to UK 5G, Huawei backdoors or DDoS?
- Backdoors found in Huawei-supplied Vodafone equipment between 2011 and 2012
- Microsoft researchers find NSA-style backdoor in Huawei laptops
- 5G cyber-attack: What would be the effect on the UK?
- Huawei: Why UK is at odds with its cyber-allies
- NCSC: Huawei threat to national security
A survey by the NCSC concluded most UK users are still using weak passwords. Released just before the CyberUK 2019 conference in Glasgow, which I was unable to attend due to work commitments, it said the most common password on breached accounts was "123456", used by 23.2 million accounts worldwide. Next on the list were "123456789", "qwerty", "password" and "1111111". Liverpool was the most common Premier League football team used as a password, with Blink 182 the most common music act. The NCSC also published a separate analysis of the 100,000 most commonly re-occurring passwords that have been accessed by third parties in global cyber breaches. So passwords still remain the biggest Achilles' heel in our security.
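One practical use of the NCSC analysis is screening user passwords against known-breached lists, which is trivial to sketch. A real deployment would check against the full 100,000-entry list (or a k-anonymity service such as Pwned Passwords) rather than this tiny inline sample:

```python
# A tiny sample of the breached-password list, for illustration only.
COMMON = {"123456", "123456789", "qwerty", "password", "liverpool"}

def is_breached(password: str) -> bool:
    """Reject any password appearing (case-insensitively) on the breached list."""
    return password.lower() in COMMON

print(is_breached("123456"))       # True: top of the NCSC list
print(is_breached("Liverpool"))    # True: football teams are common picks
print(is_breached("X9#mQ2!rTal"))  # False
```

Wiring a check like this into account-creation and password-change flows stops the worst passwords before they ever reach a breach dataset.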
The UK hacktivist threat came back to the fore this month, after the Anonymous group took revenge on the UK government for arresting WikiLeaks founder Julian Assange by attacking Yorkshire councils. I am not sure what Yorkshire's link with Assange actually is, but the website for Barnsley Council was taken down by a DDoS attack; a tweet from the group CyberGhost404 linked to the crashed Barnsley Council website and said "Free Assange or chaos is coming for you!". A tweet from an account called 'Anonymous Espana', accompanied by an image, suggested they had access to Bedale Council's confidential files and were threatening to leak them.
Finally, but not least, a great report by Recorded Future on the rise of the dark web business of credential stuffing, titled "The Economy of Credential Stuffing Attacks". The report explains how low-level criminals use automated 'checker' tools to validate compromised credentials before selling them on.
I am aware of school children getting sucked into this illicit world. It typically starts with them seeking to take over better online game accounts after their own account is compromised, and they can quickly end up with more money than they can spend. Aside from keeping an eye on what your children are up to online as a parent, it goes to underline the importance of using unique, complex passwords with every web account (use a password manager or vault to help you - see the password security section on the Security Expert website). And always use multi-factor authentication where available, and if you suspect or are informed your account may have been compromised, change your password straight away.
- How Business can address the Security Concerns of Online Shoppers
- Third Party Security Risks to Consider and Manage
- Huawei to be given limited access to UK 5G Network
- The NCSC launches Cyber Security tool for UK Businesses and Authorities
- German Drug Manufacturer Bayer hit by Malware Attack originating from China
- Aebi Schmidt latest Manufacturer dealing with Ransomware Cyberattack
- 540M Facebook Member Records exposed by an Unsecure AWS S3 Bucket
- Microsoft will drop Password Expiration Policies in Windows 10 and in Windows Server
- 'Assange Supporters’ Claim to Hack Yorkshire Councils
- Hackers beat University Cyber-Defences in Two Hours
- App leaves over 2 Million WiFi Network Passwords Exposed on Open Database
- Two in Three Hotel Websites Leak Guest Booking Details and Allow Access to Personal Data
- Yahoo to pay £90M in latest settlement of Massive Breach
- Hackers nab emails and more in Microsoft Outlook, Hotmail, and MSN Compromise
- 4 in 5 IT Chiefs are delaying Security Patches to avoid Business Disruption
- A Public Database Exposed the Medical Records of 150,000 Rehab Patients
- Amnesty Intl. says Cyberattack on Hong Kong office appears linked to known APT group
- Cyber-Attacks ‘Damage’ National Infrastructure
- Microsoft Patches 75 Vulnerabilities, including 14 Critical for Windows, IE\Edge, Chakra and Adobe Flash
- Adobe Releases Fixes for 21 Vulnerabilities in Acrobat and Acrobat Reader
- Machines running popular AV software go unresponsive after Microsoft Windows update
- Apache Tomcat Vulnerability Results in Remote Code Execution
- Adobe’s Patch Tuesday includes Security Updates for Flash Player and AIR
- Attackers Exploit WordPress Zero Day following Disclosure
- WinRAR Exploit used by MuddyWater APT phishing gang
- ISC Patches Three Vulnerabilities in BIND
- Flawed P2P technology Threatens Millions of IoT Devices
- The Economy of Credential Stuffing Attacks
- ShadowHammer code Found in several Video Games
- Researchers uncover new ‘TajMahal’ APT framework, plus a new Gaza Cybergang malware campaign
- Baldr Stealer Malware Active in the Wild With ongoing Updates
- TA505 Targets Financial and Retail using 'Undetectable' Methods
- Lazarus Targets Mac Users With Malware
- Attackers Deploy New ICS Attack Framework “TRITON” and Cause Operational Disruption to Critical Infrastructure