Monthly Archives: January 2019

NBlog Feb 1st – awareness module on mistakes

Security awareness and training programs are primarily concerned with incidents involving deliberate threats such as hackers and malware. In February, we take a look at mistakes, errors, accidents and other situations that inadvertently compromise the integrity of information, such as:
  • Typos;
  • Using inaccurate data, often without realizing it;
  • Having to make decisions based on incomplete and/or out-of-date information;
  • Mistakes when designing, developing, using and administering IT systems, including those that create or expose vulnerabilities to further incidents (such as hacks and malware);
  • Misunderstandings, untrustworthiness, unreliability etc. harming the organization’s reputation and its business relationships.
Mistakes are far more numerous than hacks and malware infections but thankfully most are trivial or inconsequential, and many are spotted and corrected before any damage is done. However, serious incidents involving inaccurate or incomplete information do occur occasionally, reminding us (after the fact!) to be more careful about what we are doing. 
The NoticeBored awareness and training materials take a more proactive angle, encouraging workers to take more care with information especially when handling (providing, communicating, processing or using) particularly important business- or safety-critical information – when the information risks are greater.

Learning objectives

  • Introduces the topic, describing the context and relevance of 'mistakes' to information risk and security;
  • Expands on the associated information risks and typical information security controls to cut down on mistakes involving information;
  • Offers straightforward information and pragmatic advice, motivating people to think - and most of all act – so as to reduce the number and severity of mistakes involving information;
  • Fosters a corporate culture of error-intolerance through greater awareness, accountability and a focus on information quality and integrity.
NoticeBored subscribers are encouraged to customize the content supplied, adapting both the look-and-feel (the logo, style, formatting etc.) to suit their awareness program’s branding, and the content to fit their information risk, security and business situations. Subscribers are free to incorporate additional content from other sources, or to cut-and-paste selections from the NoticeBored materials into staff newsletters, internal company magazines, management reports etc. making the best possible use of the awareness content supplied.

So what about your learning objectives in relation to mistakes and errors? Does your organization have persistent problems in this area? Is this an issue that deserves greater attention from staff and management, perhaps in one or more departments, sites/business units or teams? Have mistakes with information ever led to significant incidents? What have you actually done to address the risk?

HINT: Don't be surprised if the same methods lead to the same results. "The successful man will profit from his mistakes ... and try again in a different way" [Dale Carnegie].

Playbook Fridays: Query Cymon.io API

This Playbook queries the Cymon.io API, which tracks malware, phishing, botnets, spam, and more

ThreatConnect developed the Playbooks capability to help analysts automate time-consuming and repetitive tasks so they can focus on what is most important, and in many cases to ensure the analysis process occurs consistently and in real time, without human intervention.

Happy Friday! This Friday, we are featuring a Playbook which queries Cymon’s API. Cymon, run by eSentire, is an open service which tracks “malware, phishing, botnets, spam, and more” (from https://cymon.io/).

The Playbook is pretty simple:

The Playbook starts with a user-action trigger (which means you can trigger this Playbook from an indicator’s page).

The Playbook then determines whether the given data is an IP Address indicator or a host indicator, queries Cymon’s API, and returns the response to the indicator’s page so you can see the results with one click and without leaving the page! This Playbook does require a Cymon API Token which is stored as a keychain variable. You can register for a Cymon API Token here.
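The branching step the Playbook performs (decide whether the indicator is an IP address or a host, then query the matching Cymon endpoint) can be sketched in plain Python. Note the endpoint path layout and the Bearer-token header here are assumptions about Cymon's API made for illustration; the actual Playbook handles all of this inside ThreatConnect using the stored keychain variable.

```python
import json
import urllib.request

# Assumed endpoint layout for illustration; check Cymon's API docs for the real paths.
CYMON_API = "https://api.cymon.io/v2/ioc/search"

def build_request(indicator: str, api_token: str) -> urllib.request.Request:
    """Build a Cymon lookup request for an IP or host indicator.

    A naive check: anything made only of digits and dots is treated as an
    IPv4 address, everything else as a hostname. This mirrors the
    Playbook's IP-vs-host branching step.
    """
    ioc_type = "ip" if indicator.replace(".", "").isdigit() else "domain"
    url = f"{CYMON_API}/{ioc_type}/{indicator}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_token}"}
    )

def query_cymon(indicator: str, api_token: str) -> dict:
    """Send the lookup and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(indicator, api_token)) as resp:
        return json.load(resp)
```

In the real Playbook, the response JSON is rendered back onto the indicator's page rather than returned to a caller.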

You can download the playbook from our Playbooks repository: https://github.com/ThreatConnect-Inc/threatconnect-playbooks/tree/master/playbooks/TCPB-UA-Cymon%20Query%20IP%20and%20Host. If you have any questions, feedback, or run into any problems, feel free to raise an issue.

Happy hunting!

The post Playbook Fridays: Query Cymon.io API appeared first on ThreatConnect | Intelligence-Driven Security Operations.

Unchecked open source components introducing more risk to businesses

At Veracode, we’ve been the first and the loudest in proclaiming that companies need to be vigilant in how they use open source components in their software.

Our research shows that open source components are used with increasing regularity in the enterprise. The State of Software Security Volume 9 report, which examined 700,000 scans over 12 months, found that 87.5 percent of Java applications had at least one vulnerability in a component. In addition, open source applications were found to be among the slowest of all applications to be fixed: 93 days after identification, developers had remediated only 25 percent of open source flaws.

A separate recent industry report pointed to the fact that a vulnerable version of the open source Apache Struts library, the same vulnerable library that hackers accessed to steal information on millions of consumers, is still being downloaded and used by some of the most profitable and prominent global enterprises. In March 2017, a number of high profile targets were zapped by what we dubbed the “Struts-Shock” flaw. This critical vulnerability in the Apache Struts 2 library enables remote code execution (RCE) attacks using command injection, leaving as many as 35 million sites vulnerable. The bad guys exploited the vulnerability in a range of victims’ applications, most notably the Canada Revenue Agency and the University of Delaware, in a breach of records that USA Today reported could cost the organization as much as $19 million.

The fact that vulnerable software is still in use even after such damaging effects illustrates both the ubiquitous use of open source code in software applications worldwide and that the race to deploy and evolve applications is pushing companies to build software more quickly. As Veracode CTO Chris Wysopal wrote in Forbes, “The benefits of open source code can be so alluring that businesses can forget about the risks involved with using public, unvetted chunks of software throughout their applications. Vulnerabilities in open source code are prized by hackers simply because of the prevalence of their use.”

The open source conundrum for businesses is getting more complex: there are 5 million open source libraries now, but the growth rate is exponential – we will see millions more developers releasing up to half a billion libraries within the next decade. This expands the attack surface for businesses that use open source in their applications, because while open source creates efficiency, developers also inherit vulnerabilities in the components they use.

Scanning code to reveal flaws and recommend fixes to developers is critical. As organizations tackle bug-ridden components, they should consider not just the open flaws within libraries and frameworks, but also how they are using those components. By understanding not just the status of the component, but whether or not a vulnerable method is being called, organizations can pinpoint their component risk and prioritize fixes based on the riskiest uses of components.
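The idea of prioritizing by whether a vulnerable method is actually called can be illustrated with a toy sketch. Everything here is hypothetical: the component names, the advisory data, and the crude textual check stand in for the call-graph analysis a real SCA tool performs.

```python
# Hypothetical advisory data: component -> vulnerable method and severity.
ADVISORIES = {
    "struts2-core": {"vulnerable_method": "OgnlUtil.getValue", "severity": "critical"},
    "commons-collections": {"vulnerable_method": "InvokerTransformer.transform", "severity": "high"},
}

def prioritize(components, source_code):
    """Return findings sorted so components whose vulnerable method
    actually appears in the scanned source code come first."""
    findings = []
    for name in components:
        adv = ADVISORIES.get(name)
        if not adv:
            continue
        # Crude textual check; real tools walk the application's call graph.
        called = adv["vulnerable_method"] in source_code
        findings.append({
            "component": name,
            "severity": adv["severity"],
            "method_called": called,
        })
    # False sorts before True, so negate: called-method findings first.
    return sorted(findings, key=lambda f: not f["method_called"])

code = "value = OgnlUtil.getValue(expr, ctx, root)"
for f in prioritize(["commons-collections", "struts2-core"], code):
    print(f["component"], f["method_called"])
```

The point of the sketch is the sort order: a critical advisory whose vulnerable method is never invoked may matter less than a finding whose risky code path is demonstrably reachable.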

To address the risk of open source vulnerabilities in the software supply chain, groups such as PCI, OWASP, and FS-ISAC now have specific controls and policies in place to govern the use of open source components. But for global enterprises with multiple and vast repositories of code, identifying all the applications where open source vulnerabilities may exist can be difficult.

That’s where Veracode comes in. Our solution allows enterprises to quickly identify every application with vulnerable components, making it easy to address open source vulnerabilities and continue realizing the benefits of open source software.

When news breaks about new open source vulnerabilities, Veracode helps you quickly identify which applications in your organization are vulnerable, saving time as you plan for remediation.

Veracode’s cloud-based platform scans software to identify both open source vulnerabilities and flaws in proprietary code with the same scan, providing greater visibility into security across the entire application landscape. During the mitigation process, Veracode’s team of experts supports your people, processes and technology, and coaches your engineers on secure coding practices and ways to manage mitigation and remediation.

Learn more about controlling your risk with the Veracode platform here.

CISO series: Talking cybersecurity with the board of directors

In today's threat landscape, boards of directors are more interested than ever before in their company's cybersecurity strategy. If you want to maintain a board's confidence, you can't wait until after an attack to start talking to them about how you are securing the enterprise. You need to engage them in your strategy early and often, with the right level of technical detail, packaged in a way that gives the board exactly what they need to know, when they need to know it.

Cyberattacks have increased in frequency and size over the years, making cybersecurity as fundamental to the overall health of the business as financial and operational controls. Today's boards of directors know this, and they are asking their executive teams to provide more transparency on how their company manages cybersecurity risks. If you are a technology leader responsible for security, achieving your goals often includes building alignment with the board.

Bret Arsenault, corporate vice president and chief information security officer (CISO) for Microsoft, was a recent guest on our CISO Spotlight Series, where he shared several of his learnings on building a relationship with the board of directors. We've distilled them down to the following three best practices:

  • Use the board's time effectively.
  • Keep the board educated on the state of cybersecurity.
  • Speak to the board's top concerns.

Use the board's time effectively

Members of your board come from a variety of different backgrounds, and they are responsible for all aspects of risk management for the business, not just security. Some board members may track the latest trends in security, but many won't. When it's time to share your security update, you need to cut through all the other distractions and land your message. This means you will want to think almost as much about how you are going to share your information as what you are going to share, keeping in mind the following tips:

  • Be concise.
  • Avoid technical jargon.
  • Provide regular updates.

This doesn't mean you should dumb down your report or avoid important technical information. It means you need to adequately prepare. It may take several weeks to analyze internal security data, understand key trends, and distill it down to a 10-page report that can be presented in 30 to 60 minutes. Quarterly updates will help you learn what should be included in those 10 pages, and it will give you the opportunity to build on prior reports as the board gets more familiar with your strategy. No matter what, adequate planning can make a big difference in how your report is received.

Keep the board educated on the state of cybersecurity

Stories about security breaches get a lot of attention, and your board may hope you can prevent an attack from ever happening. A key aspect of your role is educating them on the reasons why no company will ever be 100 percent secure. The real differentiation is how effectively a company responds to and recovers from an inevitable incident.

You can also help your board understand the security landscape better with analysis of the latest security incidents and updates on cybersecurity regulations and legislation. Understanding these trends will help you align resources to best protect the company and stay compliant with regional security laws.

Speak to the board's top concerns

As you develop your content, keep in mind that the best way to get the board's attention is by aligning your messages to their top concerns. Many boards are focused on the following key questions:

  • How well is the company managing its risk posture?
  • What is the governance structure?
  • How is the company preparing for the future?

To address these questions, Bret sticks to the following talking points:

  • Technical debt – An ongoing analysis of legacy systems and technologies and their security vulnerabilities.
  • Governance – An accounting of how security practices and tools measure up against the security model the company is benchmarked against.
  • Accrued liability – A strategy to future-proof the company to avoid additional debts and deficits.

When it comes to effectively working with the board and other executives across your organization, a CISO should focus on four primary functions: manage risk, oversee technical architecture, implement operational efficiency, and most importantly, enable the business. In the past, CISOs were completely focused on technical architecture. Good CISOs today, and those who want to be successful in the future, understand that they need to balance all four responsibilities.

Learn more

Be sure to check out the interview with Bret in Part 1 of the CISO Spotlight Series, Security is Everyone's Business, to hear firsthand his recommendations for talking to the board. And in Part 2, Bret walks through how to talk about security attacks and risk management with the board.

The National Institute of Standards and Technology (NIST) Cybersecurity Framework is a great reference if you are searching for a benchmark model.

To read more blogs from the series, visit the CISO series page.

The post CISO series: Talking cybersecurity with the board of directors appeared first on Microsoft Secure.

Expanding the U.S. Press Freedom Tracker two years after launch


A little over 18 months ago, Freedom of the Press Foundation — in partnership with the Committee to Protect Journalists and several other prominent press rights organizations— launched the U.S. Press Freedom Tracker, a database and website that aims to systematically count press freedom violations in the US.

In fewer than two years, we have documented, reported on, and categorized close to 300 press freedom violations in the United States. This critical information — about journalists arrested, stopped at the border, surveilled, denied public access, and physically attacked — opens a window into some of the most urgent problems for journalists working in the United States. Many of these incidents were not reported elsewhere, and the Tracker is the only place where all such violations are documented in one place. (See our review of the 2018 data in this post by our Tracker correspondent Camille Fassett).

We’re pleased to announce we’re expanding the U.S. Press Freedom Tracker and making it a permanent part of the press freedom infrastructure in the United States.

First, we have a new managing editor, Kirstin McCudden. Kirstin has both a bachelor’s and master’s in journalism from the University of Missouri’s School of Journalism, and comes to us from Kansas City PBS, where she was managing editor of digital. Under her leadership, we’ll be expanding the number of correspondents around the country reporting cases, amplifying our reporting so our stories reach a wider audience, and exploring new ways we can bring these issues to the public.

This expansion means we can continue adapting to new press freedom threats as they arise. Recently, our correspondent Stephanie Sugars created a map of many of the hoax bomb threats aimed at news organizations in late 2018. Sadly, we have also started to track journalists killed in the United States, after the tragic events in Annapolis last year. In addition, we’re always looking at ways of better documenting systematic harassment of journalists or direct threats of physical harm that have increasingly been at the forefront in recent months.

As part of our plan to amplify the U.S. Press Freedom Tracker, we will begin to regularly send out updates regarding major press freedom threats to the Tracker’s newsletter subscribers. If you would like to receive these updates, please click here to subscribe, if you haven’t already.

We’re also thrilled to announce new funding and support for the Tracker, which not only ensures its sustainability but also brings the project to people who wouldn’t otherwise hear about this important work.

Our partners on the project, the Committee to Protect Journalists (CPJ), generously funded the launch and the first 18 months of the U.S. Press Freedom Tracker’s operations. CPJ will continue to provide significant financial support, and we’ve also received two other major grants – from Craig Newmark Philanthropies and Open Society Foundations – to help cement and expand the project. Thank you to all these organizations for funding this important work.

Finally, we’d like to hear from you. Are you a journalist who has had their rights violated? Please get in touch here, so we can document your case. Or maybe you’re a journalist who wants to use our data, or give us suggestions on what data we can collect that would be more useful. Please let us know. Or maybe you just want to stay better attuned to the variety of threats journalists face and what can be done about it. If so, please sign up for our newsletter.

Or if you’d like to donate to support the U.S. Press Freedom Tracker so we can sustain it for many years to come, please go here.

It’s always concerning that there are so many threats to press freedom that need documenting. But our hope is that by shining a light on them, we can help prevent more of these acts from happening in the future.


What You Need to Know About DNS Flag Day

This blog was written by Michael Schneider, Lead Product Manager.

The internet is built on Postel’s law, often referred to as the robustness principle: “Be conservative in what you do, be liberal in what you accept from others.” In the protocol world, this means that receivers will try to accept and interpret data that they receive to their best knowledge and will be flexible if the data doesn’t fully match a specification. Senders should adhere to specifications and comply with protocol specifications, as laid out in Request for Comment documents (RFCs) by the Internet Engineering Task Force.

DNS was specified in RFC 1035 in 1987 and later extended by EDNS, defined in RFC 2671 in 1999 and updated by RFC 6891 in 2013. EDNS, or extension mechanisms for DNS, aimed to flexibly deploy new features into the DNS protocol, including protection against DNS flooding attacks, amongst other performance and security enhancements. Such attacks can cause a major outage for cloud-based infrastructure, as happened in 2016 with the DDoS attack on DNS provider Dyn.

To avoid such attacks and improve DNS efficiency, several DNS software and service providers—like Google, Cisco, and Cloudflare—have agreed to “coordinate removing accommodations for non-compliant DNS implementations from their software or service,” beginning Feb. 1, 2019, or DNS Flag Day.

Before DNS Flag Day, if an EDNS-capable resolver queried a name server that didn't support EDNS, it would first send an EDNS query. If there was no response, the resolver would then retry with a legacy DNS query. That means the timeout for the first query had to elapse before the legacy DNS query was sent, generating a delayed response. These delays ultimately make DNS operations less efficient.

But with the new changes introduced for DNS Flag Day, any DNS server that doesn't respond to EDNS will be seen as "dead" and no additional DNS query will be sent to that server. The result? Certain domains or offerings may no longer be available, as name resolution will fail. Organizations should plan to provide a bridge between their internal DNS and a provider's DNS to ensure that the EDNS protocol is used. They should also work with their vendors to verify that EDNS is part of DNS communication and obtain a version of the respective product that complies with the requirements of EDNS.
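The behavioral change can be made concrete with a small simulation of the two resolution strategies. The `Server` objects and timings below are stand-ins invented for illustration, not a real DNS client: the point is that the old path pays a timeout penalty for non-EDNS servers, while the new path simply gives up on them.

```python
# Simulated resolver behavior before and after DNS Flag Day.
# `supports_edns` is a stand-in flag; latencies are illustrative only.

EDNS_TIMEOUT = 2.0  # seconds to wait before falling back (illustrative)

class Server:
    def __init__(self, supports_edns: bool):
        self.supports_edns = supports_edns

def resolve_pre_flag_day(server: Server, name: str) -> dict:
    """Old behavior: try EDNS, wait out the timeout, then retry as plain DNS."""
    if server.supports_edns:
        return {"name": name, "latency": 0.05, "edns": True}
    # EDNS probe times out, then a legacy retry succeeds: slow, but it works.
    return {"name": name, "latency": EDNS_TIMEOUT + 0.05, "edns": False}

def resolve_post_flag_day(server: Server, name: str) -> dict:
    """New behavior: a server that ignores EDNS is treated as dead."""
    if server.supports_edns:
        return {"name": name, "latency": 0.05, "edns": True}
    raise TimeoutError(f"{name}: server unresponsive to EDNS, no fallback")
```

Under the old logic every lookup against a non-compliant server eats the full timeout; under the new logic that same server causes resolution to fail outright, which is why verifying EDNS compliance before Feb. 1 matters.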

The DNS Flag Day protocols are a disruptive move, as they break from Postel’s law—servers can no longer automatically accept every query. But as with most internet-related innovations, progress requires a little disruption.

The post What You Need to Know About DNS Flag Day appeared first on McAfee Blogs.

Teach Kids The 4Rs Critical for Online Safety on Safer Internet Day

“What are you doing?”

“Uploading pics of our school fest. And don’t peer over my shoulder, Aunty. I have already uploaded a few, so check them out on your Instagram account.”

I beat a hasty retreat and did as instructed. The photos brought out a smile – such fresh, innocent faces of kids having a good time! But that feeling rapidly changed when I read the comments on one particular pic.

“Now why are you frowning?” asked the niece.

“Perhaps you shouldn’t have shared this one. It’s attracting rude comments.”

Instantly remorseful, the niece took down the picture, but I decided to nevertheless give her a talk on responsible posting.

On the occasion of Safer Internet Day (SID) 2019, let us find out what can make our digital world a happier and safer place, and our digital experience a more positive one.

There are many well-aware digital users, like you, my dear readers, who take measures to ensure their accounts are secure and their devices safe. However, one needs to keep in mind that we are all linked online, and therefore the key word is ‘together’. No single entity or product can guarantee 100% safety online, but together we can strive to bring about a better digital experience for all. That’s the theme for 2019 too – ‘Together for a better internet’.

Incidentally, McAfee too has a similar tagline, ‘Together is Power’, underlining the fact that it needs the collaboration of all players- digital users, organizations and vendors- to make cybersecurity effective.

Organizations lay down rules and monitor usage, vendors provide security tools and that leaves us, the users.  What can we do?

‘What can we do as parents?’ Let us start by helping our kids develop four critical skills – the 4Rs of online safety:

  • Respect – I treat myself and others the way I like to be treated
  • Responsibility – I am accountable for my actions and I take a stand when I feel something is wrong
  • Reasoning – I question what is real
  • Resilience – I get back up from tough situations

Respect

How do we teach what respect means? We respect those we love or admire. But we also need to learn to respect rules, people’s feelings and take a sympathetic view of differences in physical and emotional aspects of people.  The two values that this calls for are tolerance and empathy.

Here are a few ways you can teach kids respect:

  1. Appreciate when they are tactful and kind
  2. Correct them if they are mean
  3. Make it a family practice to use ‘sorry’, ‘please’, and ‘thank you’ a lot
  4. Role model respectful behavior like being silent in the library, sharing photos with permission, treating boys and girls as equals
  5. Set rules and specify penalties for breaching them

At the same time, help your kids identify undesirable behavior that may show disrespect and abuse.

  1. Being approached by strangers online who ask for photos, personal thoughts
  2. Being a witness to rude, aggressive behavior that causes anguish
  3. Being belittled for beliefs, appearance, race, gender
  4. Being challenged to perform a dare the child isn’t comfortable with

Resilience

Standing up to injustice and aggression as well as springing back to normalcy despite a negative experience is what resilience is about. Let’s accept it, bullies will continue to exist and so it is in the interest of the kids to know how to survive tough situations online. The recipe also calls for dollops of love, support, patience from the family and friends.

Actions that may lead to negative experiences:

  1. Cyberbullying
  2. Risky challenges
  3. Being ignored by peers online
  4. Befriending child groomers
  5. Falling prey to hackers and scammers

You know what to do, right? Teach them cybersafety practices; change account settings and passwords, or even delete accounts if necessary; report scams and abuse; rope in teachers to stop bullying in school. Stand by your child. Encourage them to get back on their feet and resume normal life. Help them be tough and face the world – they will thank you for it.

Responsibility

We have often discussed responsible online behavior in these pages, so will not rehash it. Suffice to say that we are the digital space users, content generators and consumers. So, our actions online will ultimately affect us and those in contact with us and their contacts and so on and so forth, covering the entire digital populace. Practice STOP. THINK. CONNECT. SHARE.

Reasoning

We will do the kids a big favour if we can help them to think and act instead of following the herd mentality. Encourage them to question, to reason before accepting any online content to be true. Help them understand the reach and consequences of digital posts and ways to distinguish fake news from real news. Kids have wonderful reasoning power; let us push them to exercise it fully.

What can we do as a community? I think South Korea has set a sterling example:

A civil activist group in South Korea, Sunfull Internet Peace Movement, initiated the “Internet Peace Prize” in 2018 to promote online etiquette and fight cyberbullying. The award went to two people from Japan for their effort to protect human rights by tackling cyberbullying. We can start something similar in our children’s school or our neighbourhood. Schools can set up cyber armies to identify and stop cyberbullying and offer support to victims. The possibilities are many.

Stay safe online every day; it just calls for a little care. Just like in the real world.

Credits:

Office of the eSafety Commissioner, An Australian Government initiative

 

The post Teach Kids The 4Rs Critical for Online Safety on Safer Internet Day appeared first on McAfee Blogs.

Cheating Attempts and the OSCP

Last week, an individual started to release solutions to certain challenges in the OSCP certification exam. This led to some discussion on Twitter and made it clear to us that there is a fair amount of misunderstanding about what's on the exam, how we catch cheaters, how many people attempt to cheat, and what happens when they are discovered. In this post, we would like to shine some light on our certification process.

DARPA explores new computer architectures to fix security between systems

Solutions are needed to replace the archaic air-gapping of computers used to isolate and protect sensitive defense information, the U.S. Government has decided.

Air-gapping is the common practice of physically isolating data-storing computers from other systems, computers and networks so they theoretically can’t be compromised because there is nothing connecting the machines.

However, many say air-gapping is no longer practical, as the cloud and internet take a hold of massive swaths of data and communications.


Information Security no longer the Department of “NO”

The information security function within business has gained the rather unfortunate reputation of being the department of “no”, often viewed as a blocker to IT innovation and business transformation; a department seen as out of touch with genuine business needs and with the demands of an evolving workforce demographic of increasing numbers of Millennials and Centennials. However, new research by IDC/Capgemini reveals that attitudes are changing, and business leaders are increasingly relying on their Chief Information Security Officers (CISOs) to create meaningful business impact.


The study bears out a shift in executive perceptions that information security is indeed important to the business, with the modern CISO evolving from responder to driver of change, helping to build businesses that are secure by design. The survey found CISOs are now involved in 90% of significant business decisions, with 25% of business executives perceiving CISOs as proactively enabling digital transformation, which is a key goal for 89% of organisations surveyed by IDC.

Key findings from the research include: 

  • Information security is a business differentiator – Business executives think the number one reason for information security is competitive advantage and differentiation, followed by business efficiency. Just 15% of business executives think information security is a blocker of innovation, indicating that information security is no longer the ‘department of no’ 
  • CISOs are now boardroom players – 80% of business executives and CISOs think their personal influence has improved in the last three years. CISOs are now involved in 90% of medium or high influence boardroom decisions 
  • CISOs must lead digital transformation efforts – At present, less than 25% of business executives think CISOs proactively enable digital transformation. To stay relevant, CISOs must become business enablers. They need to adopt business mindsets and push digital transformation forward, not react to it. CISOs that fail to adopt a business mindset will be replaced by more forward-thinking players.
From NO to GO
CISOs have made great leaps forward
  • Focused on making security operations effective and efficient 
  • Engaged with the rest of the business 
  • Seen as key SMEs to the board 
  • Responding to business requests and enabling change
 

CISOs now need to pivot to become business leaders
  • Need to be part of the business change ecosystem
  • Must be seen as drivers rather than responders
  • CISO as entrepreneur and innovator

US will map and disrupt North Korean botnet

The US government plans to turn the tables on North Korea-linked hackers trying to compromise key infrastructure. The Justice Department has unveiled an initiative to map the Joanap botnet and "further disrupt" it by alerting victims. The FBI and the Air Force Office of Special Investigations are running servers imitating peers on the botnet, giving them a peek at both technical and "limited" identifying info for other infected PCs. From there, they can map the botnet and send notifications through internet providers and foreign governments -- they'll even send personal notifications to people who don't have a router or firewall protecting their systems.

Source: Department of Justice

What Goals Are Right for Your AppSec Program?

Clear objectives and goals are key to success for any initiative, and AppSec is no exception. But many organizations struggle to establish application security goals, or focus on the wrong goals to the detriment of their program. Below we outline factors to consider when creating goals for your application security program.

Metrics

At a high level, the goals for your AppSec program should focus on a set of core metrics:

  • Fix rate: Your fix rate = fixed flaws divided by (fixed + open flaws).
  • Flaw density, for instance flaws per MB of code:  Flaw density —measured as the number of flaws divided by the size of the application —makes it easier to compare apples to apples across different teams or business units.
  • Applications compliant with your policy.
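As a sketch of how the first two metrics are computed (all counts below are invented for illustration):

```python
# Hypothetical flaw counts for one application; the numbers are illustrative only.
fixed_flaws = 120
open_flaws = 30
app_size_mb = 25.0

# Fix rate: fixed flaws divided by (fixed + open flaws).
fix_rate = fixed_flaws / (fixed_flaws + open_flaws)

# Flaw density: total flaws divided by application size (flaws per MB here).
flaw_density = (fixed_flaws + open_flaws) / app_size_mb

print(f"Fix rate: {fix_rate:.0%}")                    # 80%
print(f"Flaw density: {flaw_density:.1f} flaws/MB")   # 6.0 flaws/MB
```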

Additional factors

The above are only the core metrics; you might have more based on your business goals, such as developer education benchmarks, or the number of applications that have been assessed or retired, or the level of scan activity.

In addition, when developing the goals and policy for your application security program, you should always consider the following factors:

Types of apps and types of vulnerabilities

Not all apps are created equal, nor are all vulnerabilities. Make your AppSec goals more targeted and effective by focusing on certain applications and vulnerabilities. For instance, an application that contains intellectual property, is public facing, and uses third-party components may require all flaws of medium severity and above to be fixed. A one-page temporary marketing site may only require high/very high severity flaws to be fixed.

In addition, don’t give every vulnerability the same level of attention. Rank vulnerabilities so that you are focused first and foremost on those that are actually increasing your risk. For instance, it’s important to distinguish between flaws that represent a remote risk and those that represent more substantial, real-world risks. In some cases, the likelihood of a vulnerability being exploited may be low, but the potential damage might be great. In other instances, the chance of exploit might be high, but the damage may not be substantial.
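The likelihood-versus-impact weighing described above can be sketched as a simple scoring pass (the flaw names and scores below are invented for illustration):

```python
# Illustrative sketch: rank flaws by a simple likelihood x impact score,
# so remediation effort goes to findings that actually increase risk.
flaws = [
    {"name": "SQL injection", "likelihood": 0.8, "impact": 0.9},
    {"name": "Code quality",  "likelihood": 0.5, "impact": 0.2},
    {"name": "Hardcoded key", "likelihood": 0.2, "impact": 0.9},
]

for f in flaws:
    f["risk"] = f["likelihood"] * f["impact"]

# Address the highest-risk findings first.
ranked = sorted(flaws, key=lambda f: f["risk"], reverse=True)
print([f["name"] for f in ranked])
```

A real program would use a richer model (exploitability, asset criticality, compensating controls), but the ordering principle is the same.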

For example, when collecting data for our most recent State of Software Security report, we found code quality flaws in twice as many applications as SQL injection vulnerabilities. However, that does not mean they pose twice as much risk as SQLi to the state of software security. Probably quite the opposite. As a class, SQLi tends to present flaws of a much higher severity and exploitability than code quality vulnerabilities.

Security know-how of your team

If security is being introduced for the first time or being enforced for the first time, start off with some achievable policy standards. Don’t make a team that has never had security built into their daily cycle try to meet PCI or all OWASP requirements; they will not pass, feel defeated, and give up before they start.

Start with a simple policy: no high or very high severity flaws. Then get more stringent over time as developers adopt security into their daily routine.
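A starter policy like this is easy to express in code; a minimal sketch (the severity labels and flaw data are hypothetical):

```python
# Starter policy: an app passes if no open flaw is high or very high severity.
STARTER_POLICY = {"high", "very high"}

def passes_policy(open_flaws, blocked_severities=STARTER_POLICY):
    """Return True if no open flaw has a blocked severity."""
    return not any(f["severity"] in blocked_severities for f in open_flaws)

open_flaws = [
    {"id": 1, "severity": "medium"},
    {"id": 2, "severity": "low"},
]
print(passes_policy(open_flaws))  # True: nothing high/very high is open
```

Tightening the policy later is then just a matter of growing the blocked set (for example, adding "medium").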

Industry you’re in

Your industry might dictate the regulations you are subject to, and the type of testing you need to conduct and goals you need to meet. For instance, retail will be subject to PCI, and finance might need to comply with the NY DFS cybersecurity regulations.

To see how others in your industry are tackling application security, what vulnerabilities they are seeing most often, and where they are seeing success or falling short, check out our most recent State of Software Security report, which includes data from our platform broken out by industry.

Learn more

Get more details on setting realistic and effective goals for your application security program in Everything You Need to Know About Application Security Policies.

NBlog Jan 31 – why so many IT mistakes?


Well, here we are on the brink of another month-end, scrabbling around to finalize and deliver February's awareness module in time for, errr, February.  

This week we've completed the staff and management security awareness and training materials on "Mistakes", leaving just the professional stream to polish-off today ... and I'm having some last-minute fun finding notable IT mistakes to spice-up the professionals' briefings. 

No shortage there!

Being 'notable' implies we don't need to explain the incidents in any detail - a brief reminder will suffice with a few words of wisdom to highlight some relevant aspect of the awareness topic. Link them into a coherent story and the job's a good 'un.

The sheer number of significant IT mistakes constitutes an awareness message in its own right: how come the IT field appears so extraordinarily error-prone? Although we don't intend to explore that question in depth through the awareness materials, our cunning plan is that it should emerge from the content and leave the audience pondering, hopefully chatting about it. Is IT more complex than other fields, making it harder to get right? Are IT pros unusually inept, slapdash and careless? What are the real root causes underlying IT's poor record? Does the blame lie elsewhere? Or is the assertion that IT has a poor record false, a mistake? 

The point of this ramble is that we've teased out something interesting and thought-provoking, directly relevant to the topic, contentious and hence stimulating. In awareness terms, that's a big win. Our job is nearly done. Just a few short hours to go now before the module is packaged and delivered, and the fun begins for our customers. 

How Malvertising Leads to Fake Flash Malware

It’s no secret that the pervasiveness of ad networks has greatly diminished the web browsing experience in recent years. With this has also come criminals and other miscreants who are using the drive for web advertising revenue to deliver malware.

Browser Extensions Can Pose Significant Cyber Security Threats

Malicious browser extensions can steal credentials, cryptocurrency, and more

From blocking ads and coin miners to saving news stories for later reading, browser extensions allow users to customize their web browsers for convenience, efficiency, and even privacy and security – usually for free. However, browser extensions need a wealth of access permissions to operate, including browsing history, website content, and even login credentials. Because extensions aren’t applications in their own right – they run inside web browsers – antivirus software generally cannot detect malicious extensions. These innate vulnerabilities, along with their popularity, make browser extensions a very attractive target for cyber criminals, who attack on two fronts: developing their own malware-infested extensions, or hijacking legitimate ones.
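As a purely hypothetical illustration of how broad those permission requests can be (the extension name and permission set below are invented, in the style of a Chrome extension manifest):

```json
{
  "name": "Hypothetical Helper",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [
    "history",
    "webRequest",
    "cookies",
    "<all_urls>"
  ]
}
```

An extension granted host access to `<all_urls>` plus `cookies` can, in principle, read and modify content – including session cookies – on every site the user visits, which is exactly why a hijacked or malicious extension is so dangerous.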


Born to be bad: malicious browser extensions

Some extensions are designed to be malicious. Most of the time, they seek to steal login credentials and other sensitive information. For example, a Medium blogger recently reported on a malicious Google Chrome extension called “CCB Cash,” which purported to give users up to 5% cash back on all of their cryptocurrency transactions. In actuality, CCB Cash did nothing but steal login credentials and cryptocurrency. Google has since removed CCB Cash from its extension store, but not before the hackers behind it managed to make off with 23.23550279 BTC, a little over $81,000 at the time.

Other malicious extensions install adware that redirects user searches to affiliate pages that the developers earn money from; a variant on this scheme replaces legitimate search engine ads with affiliate ads. Sometimes, extensions will redirect users to phishing sites or sites that contain drive-by downloads.

CCB Cash, with its outrageous promises of 5% cash back on practically everything, was an excellent example of the old adage, “If it sounds too good to be true, it probably is.” However, not all malicious browser extensions display obvious red flags. Just like malicious mobile phone apps, many of them disguise themselves as legitimate tools, such as a PDF reader or a VPN. The malicious extension may also impersonate a popular legitimate extension, even going so far as to stuff keywords so that their extension appears near the top of the browser’s extension store. Last year, over 20 million users installed phony ad blocker Chrome extensions before Google removed them.

Good extensions gone bad

Sometimes, hackers don’t bother coding their own extensions; they just hijack legitimate ones. There are several ways to accomplish this:

A new trojan called Razy, which spoofs searches to steal cryptocurrency, ups the ante by compromising the browser itself: it installs malicious extensions, then infects already-installed, legitimate extensions by disabling browser updates and extension integrity checks.

Protecting yourself from malicious extensions

There are a few ways to protect yourself from malicious browser extensions:

  • Only install extensions you actually need and will use.
  • Periodically review your installed extensions. Uninstall extensions that you no longer use or that you do not recognize.
  • Vet extensions before you install them. Visit the developer’s website. Read the description and the reviews. Beware if the description is riddled with spelling and grammar errors, or if the extension is relatively new but has a lot of reviews, every single one of them five-star and very similarly worded.

The cyber security experts at Continuum GRC have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting your organization from security breaches. Continuum GRC offers full-service and in-house risk assessment and risk management subscriptions, and we help companies all around the world sustain proactive cyber security programs.

Continuum GRC is proactive cyber security®. Call 1-888-896-6207 to discuss your organization’s cyber security needs and find out how we can help your organization protect its systems and ensure compliance.


OWASP: What Are the Top 10 Threats and Why Does It Matter?

Since the founding of the Open Web Application Security Project (OWASP) in 2001, it has become a leading resource for online security best practices. OWASP identifies itself as an open community dedicated to enabling organizations to develop and maintain applications and APIs that are protected from common threats and exploits.

In particular, they publish a list of the “10 Most Critical Web Application Security Risks,” which effectively serves as a de facto application security standard. The “Top 10” are the most critical risks to web application security, as selected by an international group of security experts. The free information lists several vulnerabilities that are easy to overlook, including insufficient attack protection in applications, cross-site request forgeries, broken access controls, under-protected APIs, and more.

Nearly every organization requires an online presence to conduct business, which means virtually every organization should be aware of web-based vulnerabilities and design a plan to address them. Understanding the OWASP Top 10 is the first step toward ensuring you won’t leave yourself vulnerable.

Top 10 web application threats to know

  1. Injection: Injection flaws such as SQL, NoSQL, OS, and LDAP injections can attack any source of data and involve attackers sending malicious data to a recipient. This is a very prevalent threat in legacy code and can result in data loss, corruption, access compromise, and complete host takeover. Using a safe database API, a database abstraction layer, or a parameterized database interface helps reduce the risk of injection threats.
  2. Broken Authentication: Incorrectly implemented authentication or session management gives attackers the ability to steal passwords or tokens, or to impersonate user identities. This is widespread due to poorly implemented identity and access controls. Implementing multi-factor authentication and weak-password checks is a great start to preventing this problem. However, don’t fall into the trap of enforcing composition rules on passwords (such as requiring uppercase, lowercase, numeric and special characters), as these have been shown to weaken rather than strengthen security.
  3. Sensitive Data Exposure:  When web applications and APIs aren’t properly protected, financial, healthcare, or other personally identifiable information (PII) data can be stolen or modified and then used for fraud, identity theft, or other criminal activities. Proper controls, encryption, removal of unnecessary data, and strong authentication can help to prevent exposure. 
  4. XML External Entities (XXE): Attackers can exploit vulnerable XML processors by including malicious content in an XML document. External entities can disclose internal files or be used to execute internal port scanning, remote code execution, and DDoS attacks. It is difficult to identify and eliminate XXE vulnerabilities, but a few easy improvements are patching all XML processors, ensuring comprehensive validation of XML input against a schema, and limiting XML input where possible.
  5. Broken Access Control: This happens when policies on what users can access are loosely enforced. This results in attackers exploiting flaws to access data and functionality they are not authorized to access, such as accessing other users’ accounts, viewing sensitive files, modifying other users’ data, and changing access rights. It is suggested to use access control that is enforced in trusted server-side code, or even better, an external API gateway.
  6. Security Misconfiguration: Misconfigurations are the most common threat to organizations. This results from insecure or incomplete default configurations, open cloud storage, and verbose error messages. It is essential to securely configure and patch all operating systems, frameworks, libraries, and applications, and to follow best practices suggested by each hardware or software vendor to harden their systems.
  7. Cross-Site Scripting (XSS): These flaws occur when an application includes untrusted data in a web page. With XSS flaws, attackers can execute scripts in the victim’s browser, which can result in hijacked user sessions, defaced websites, or redirecting the user to a malicious site. In order to prevent XSS, you must separate untrusted data from active browser content, for example by using libraries that automatically escape user input.
  8. Insecure Deserialization: Insecure deserialization often leads to remote code execution scenarios. Even if remote code execution doesn’t happen, these flaws can be used to perform replay, injection, and privilege escalation attacks. One way to prevent this is not to accept serialized objects from untrusted sources. 
  9. Using Components with Known Vulnerabilities: Components include operating systems, web servers, web frameworks, encryption libraries, or other software modules. Applications and APIs using components with known vulnerabilities will undermine application protection measures and enable several types of attacks. A strong patch management measure largely prevents this problem.
  10. Insufficient Logging and Monitoring: Insufficient logging and monitoring can allow attackers to spread unchecked within an organization, maintain persistence, and extract or destroy data. This results in attackers having access for weeks, sometimes months. Using an effective monitoring and incident alerting solution can close the gap and spot attackers much quicker.
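The parameterized-interface advice in item 1 can be illustrated with a short sketch using Python's built-in sqlite3 module (the table and payload are invented for the example):

```python
import sqlite3

# In-memory database with one hypothetical row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Unsafe (don't do this): string concatenation lets the payload rewrite the query.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe: the driver binds user_input as data, never as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matched nothing
```

The same pattern applies to any parameterized database interface or abstraction layer: user input travels as a bound value, so it can never change the query's structure.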

Keep in mind that these top 10 threats are just the most common of thousands of vulnerabilities that cyber criminals can exploit. Many people overlook web applications when they plan their security, or they falsely assume web applications are protected by their network firewall. In fact, the web application threat vector is one of the most successfully exploited because of these misunderstandings. 

The best way to defend this threat vector is with a web application firewall (WAF) that is purpose-built to secure your web applications. These firewalls provide several types of Layer 7 security, including DDoS protection, server cloaking, web scraping protection, data loss prevention, web-based identity and access management, and more.  Including a web application firewall in an organization’s security strategy and technology stack will ensure protection from these top threats and the many other threats specifically targeting your applications.

About the Author: Nitzan Miron is VP of product management and application security at Barracuda Networks.

Copyright 2010 Respective Author at Infosec Island

SN 699: Browser Extension Security

  • The expressive power of the social media friends we keep
  • The persistent DNS hijacking campaign which has the US Government quite concerned
  • Last week's iOS and macOS updates (and doubtless another one very soon!)
  • A valiant effort to take down malware distribution domains
  • Chrome catching up to IE and Firefox with drive-by file downloads
  • Two particularly worrisome vulnerabilities in two Cisco router models publicly disclosed last Friday
  • The state of the industry and the consequences of extensions to our web browsers.

We invite you to read our show notes.

Hosts: Steve Gibson and Leo Laporte

Download or subscribe to this show at https://twit.tv/shows/security-now.

You can submit a question to Security Now! at the GRC Feedback Page.

For 16kbps versions, transcripts, and notes (including fixes), visit Steve's site: grc.com, also the home of the best disk maintenance and recovery utility ever written, SpinRite 6.


How Trump’s government shutdown ground transparency to a halt

[Image: redactions. Source: Wikimedia Commons]

During President Trump’s shutdown of parts of the federal government over a border wall, transparency was declared a nonessential part of the government’s operations—with very real effects on the public’s right to know.

Last Thursday—the day before Congress signed a short-term bill ending the 35-day partial government shutdown—I spent several hours contacting numerous federal agencies’ public information and FOIA offices, in an attempt to understand which were accepting and processing requests, which were accepting new requests but not processing them, and those that were doing neither.

Considering the Department of the Interior had shut down its online FOIA request portal altogether due to the shutdown, it might seem like a reasonable assumption that no federal agencies would be processing FOIAs due to the lapse of funding. An Interior Department spokesperson noted that "FOIA requests are not directly related to protecting life and imminent threats to property."

But at least some federal FOIA requests were still slowly moving along—security researcher Trammell Hudson received an update on a request with the FBI on Jan. 9, in the midst of the shutdown. And the Justice Department's FOIA handbook clearly states that FOIA officers were still obligated to fill requests within the appropriate timeline—even in situations of government shutdowns—so I wanted to clarify.

The FBI employee that answered my phone call told me that questions about FOIA processing would be a question for FBI headquarters. But upon calling FBI headquarters, there was no one to answer the phone. I called a dozen different CBP and ICE public affairs officers, and never once reached a human. Some voice mail boxes said that they were unable to return messages due to the furlough.

On Jan. 28, days after making the calls, a spokesperson for the Environmental Protection Agency finally sent me an email: “EPA will start processing FOIA requests now that the shutdown has ended.”

I was, for every agency I contacted, unable to confirm or deny whether it was processing FOIA requests at all. (I suppose I was GLOMAR’ed yet again.)

“There is no basis for government to blow its deadlines just because of shutdown,” said Adam Marshall, an attorney at the Reporters Committee for Freedom of the Press. “But as a practical matter, that’s exactly what we’re seeing. Even under the guidance of previous shutdowns, agencies have to count the days that they took to respond. And this is really damaging the public's right to access government records.”

According to transparency activist and technologist Freddy Martinez (who was previously a fellow at Freedom of the Press Foundation), it’s important that government agencies count the number of days that a FOIA request goes untouched. If an agency does not respond at all to a request within 20 days, the requester has the right to sue.

“Making an exception to the deadlines the government has in place, for a government shutdown, would weaken FOIA and interfere with people’s right to sue. The government has seen those deadlines as necessary, considering they are the law,” he said.
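That 20-working-day clock is simple enough to sketch; a rough Python illustration follows (unlike the actual statute, it ignores federal holidays, and the filing date is chosen only as an example):

```python
from datetime import date, timedelta

def foia_response_deadline(received: date, working_days: int = 20) -> date:
    """Rough sketch: add `working_days` weekdays to the receipt date.
    Real FOIA deadline rules also exclude federal holidays, which this
    simplified illustration does not account for."""
    d = received
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            remaining -= 1
    return d

# A request filed the day the shutdown began would have come due mid-shutdown.
print(foia_response_deadline(date(2018, 12, 21)))
```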

Litigation over FOIA requests is not only more common than ever, but some also argue that lawsuits are increasingly necessary to obtain government records in the face of pervasive bureaucratic resistance.

Adam Marshall noted that during the shutdown, all of his federal FOIA litigation literally ground to a halt. “It’s disturbing because by the time a requester is at the point of litigation, people have invested time and money into an attorney and lawsuit.”

Another prominent FOIA attorney corroborated that his federal cases were also delayed. “I can tell you all of my FOIA litigation is on hold,” said the Electronic Frontier Foundation’s David Sobel. “The Justice Department has moved for stays in all of my pending cases and my understanding from other attorneys is that that’s pretty much across the board—because the assistant US attorneys who represent in these cases are furloughed.”

Kevin Goldberg—a First Amendment and FOIA attorney—said he would imagine that under any administration, FOIA would be declared a nonessential function of the federal government.

“I don’t think that’s a change, and I’m not sure it’s sort of specific action on [Trump’s] part to shut things down,” he said.

This most recent partial shutdown was certainly not the first of its kind, and in previous shutdowns, such as one in October 2013 during the Obama administration, FOIA operations were similarly shut down. But this most recent instance—which stretched from December into late January—was the longest in history. Whether the effects of the shutdown on FOIA were intentional or not, they were very real.

“If government shutdowns become more frequent, and particularly if they become longer and longer, I could see them start to affect the process and usefulness of filing FOIA requests—including the litigation process—in really troubling ways,” said Martinez.

Michael Morisy, cofounder of MuckRock, noted that FOIA offices were already stretched to the breaking point before Trump’s hiring freeze and certainly before the partial shutdown. “Telling FOIA officers that they’re without pay, when they weren’t paid well in the first place?”

Fixing Virtualbox RDP Server with DetectionLab

Yesterday I posted about DetectionLab, but noted that I was having trouble with the RDP servers offered by Virtualbox. If you remember, DetectionLab builds four virtual machines:

root@LAPTOP-HT4TGVCP C:\Users\root>"c:\Program Files\Oracle\VirtualBox\VBoxManage" list runningvms
"logger" {3da9fffb-4b02-4e57-a592-dd2322f14245}
"dc.windomain.local" {ef32d493-845c-45dc-aff7-3a86d9c590cd}
"wef.windomain.local" {7cd008b7-c6e0-421d-9655-8f92ec98d9d7}
"win10.windomain.local" {acf413fb-6358-44df-ab9f-cc7767ed32bd}

I was having a problem with two of the VMs sharing the same port for the RDP server offered by Virtualbox. This meant I could not access one of them. (Below, port 5932 has the conflict.)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo logger | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5955, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address  = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo dc.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo wef.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo win10.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5981, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"
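The collision above is easy enough to spot programmatically; a rough sketch, with the showvminfo output abridged and inlined as sample data (a real script would call VBoxManage via subprocess instead):

```python
import re
from collections import defaultdict

# Abridged "VBoxManage showvminfo" VRDE lines for each VM, as shown above.
vrde_lines = {
    "logger":                "VRDE: enabled (Address 0.0.0.0, Ports 5955, ...)",
    "dc.windomain.local":    "VRDE: enabled (Address 0.0.0.0, Ports 5932, ...)",
    "wef.windomain.local":   "VRDE: enabled (Address 0.0.0.0, Ports 5932, ...)",
    "win10.windomain.local": "VRDE: enabled (Address 0.0.0.0, Ports 5981, ...)",
}

# Group VMs by VRDE port and report any port claimed by more than one VM.
by_port = defaultdict(list)
for vm, line in vrde_lines.items():
    port = re.search(r"Ports (\d+)", line).group(1)
    by_port[port].append(vm)

conflicts = {port: vms for port, vms in by_port.items() if len(vms) > 1}
print(conflicts)  # {'5932': ['dc.windomain.local', 'wef.windomain.local']}
```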

To fix this, I explicitly added port values to the configuration in the Vagrantfile. Here is one example:

      vb.customize ["modifyvm", :id, "--vrde", "on"]
      vb.customize ["modifyvm", :id, "--vrdeaddress", "0.0.0.0"]
      vb.customize ["modifyvm", :id, "--vrdeport", "60101"]

After a 'vagrant reload', the RDP servers were now listening on new ports, as I hoped.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo logger | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60101, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address  = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo dc.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60102, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo wef.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60103, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo win10.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60104, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

This is great, but I am still encountering a problem with avoiding port collisions when Vagrant remaps ports for services on the VMs.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant status
Current machine states:

logger                    running (virtualbox)
dc                        running (virtualbox)
wef                       running (virtualbox)
win10                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port logger
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

    22 (guest) => 2222 (host)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port dc
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

  3389 (guest) => 3389 (host)
    22 (guest) => 2200 (host)
  5985 (guest) => 55985 (host)
  5986 (guest) => 55986 (host)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port wef
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

  3389 (guest) => 2201 (host)
    22 (guest) => 2202 (host)
  5985 (guest) => 2203 (host)
  5986 (guest) => 2204 (host)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port win10
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

  3389 (guest) => 2205 (host)
    22 (guest) => 2206 (host)
  5985 (guest) => 2207 (host)
  5986 (guest) => 2208 (host)

The dc entry mapping guest port 3389 to host port 3389 is the problem. Vagrant should not be mapping port 3389, which is already in use by the RDP server on the Windows 10 host, such that it tries to be available to the guest.

I tried telling Vagrant by hand in the Vagrantfile to map port 3389 elsewhere, but nothing worked. (I tried entries like the following.)

    config.vm.network :forwarded_port, guest: 3389, host: 5789

I also searched to see if there might be a configuration outside the Vagrantfile that I was missing. Here is what I found:

ds61@ds61:~/DetectionLab-master$ find . | xargs grep "3389" *
./Terraform/Method1/main.tf:    from_port   = 3389
./Terraform/Method1/main.tf:    to_port     = 3389
./Packer/vagrantfile-windows_2016.template:    config.vm.network :forwarded_port, guest: 3389, host: 3389, id: "rdp", auto_correct: true
./Packer/scripts/enable-rdp.bat:netsh advfirewall firewall add rule name="Open Port 3389" dir=in action=allow protocol=TCP localport=3389
./Packer/vagrantfile-windows_10.template:    config.vm.network :forwarded_port, guest: 3389, host: 3389, id: "rdp", auto_correct: true

I wonder if those Packer templates have anything to do with it, or if I am encountering a problem with Vagrant? I have seen many people experience similar issues, so I don't know.

It's not a big deal, though. Now that I can directly access the virtual screens for each VM on Virtualbox via the RDP server, I don't need to RDP to port 3389 on each Windows VM in order to interact with it.

If anyone has any ideas, though, I'm interested!

Apple Users: Here’s What to Do About the Major FaceTime Bug

FaceTime is a popular way for people of all ages to connect with long-distance loved ones. The feature permits Apple users to video chat with other device owners from essentially anywhere at any time. And now, a bug in the software takes that connection a step further – as it permits users calling via FaceTime to hear the audio coming from the recipient’s phone, even before they’ve accepted or denied the call.

Let’s start with how the eavesdropping bug actually works. First, a user would have to start a FaceTime video call with an iPhone contact and while the call is dialing, they must swipe up from the bottom of the screen and tap “Add Person.” Then, they can add their own phone number to the “Add Person” screen. From there, the user can start a group FaceTime call between themselves and the original person dialed, even if that person hasn’t accepted the call. What’s more – if the user presses the volume up or down, the victim’s front-face camera is exposed too.

This bug acts as a reminder that these days your smartphone is just as data rich as your computer. So, as we adopt new technology into our everyday lives, we all must consider how these emerging technology trends could create security risks if we don’t take steps to protect our data.

Therefore, it’s crucial that all iOS users running iOS 12.1 or later take the right steps now to protect their device and their data. If you’re an Apple user affected by this bug, be sure to follow these helpful security steps:

  • Update, update, update. Patches for bugs like this are delivered in software updates from the provider, so make sure you always update your device as soon as a new version is available. Apple has already confirmed that a fix is underway.
  • Disable FaceTime in iOS Settings now. Until this bug is fixed, it is best to disable the feature entirely to be sure no one is listening in on you. When a fix does emerge from Apple, you can look into enabling the service again.
  • Apply additional security to your phone. Though the bug will hopefully be patched within the next software update, it doesn’t hurt to always cover your device with an extra layer of security. To protect your phone from any additional mobile threats coming its way, be sure to use a security solution such as McAfee Mobile Security.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Apple Users: Here’s What to Do About the Major FaceTime Bug appeared first on McAfee Blogs.

voucher_swap: Exploiting MIG reference counting in iOS 12

Posted by Brandon Azad, Project Zero

In this post I'll describe how I discovered and exploited CVE-2019-6225, a MIG reference counting vulnerability in XNU's task_swap_mach_voucher() function. We'll see how to exploit this bug on iOS 12.1.2 to build a fake kernel task port, giving us the ability to read and write arbitrary kernel memory. (This bug was independently discovered by @S0rryMybad.) In a later post, we'll look at how to use this bug as a starting point to analyze and bypass Apple's implementation of ARMv8.3 Pointer Authentication (PAC) on A12 devices like the iPhone XS.

A curious discovery

MIG is a tool that generates Mach message parsing code, and vulnerabilities resulting from violating MIG semantics are nothing new: for example, Ian Beer's async_wake exploited an issue where IOSurfaceRootUserClient would over-deallocate a Mach port managed by MIG semantics on iOS 11.1.2.

Most prior MIG-related issues have been the result of MIG service routines not obeying semantics around object lifetimes and ownership. Usually, the MIG ownership rules are expressed as follows:

  1. If a MIG service routine returns success, then it took ownership of all resources passed in.
  2. If a MIG service routine returns failure, then it took ownership of none of the resources passed in.

Unfortunately, as we'll see, this description doesn't cover the full complexity of kernel objects managed by MIG, which can lead to unexpected bugs.

The journey started while investigating a reference count overflow in semaphore_destroy(), in which an error path through the function left the semaphore_t object with an additional reference. While looking at the autogenerated MIG function _Xsemaphore_destroy() that wraps semaphore_destroy(), I noticed that this function seems to obey non-conventional semantics.

Here's the relevant code from _Xsemaphore_destroy():

    task = convert_port_to_task(In0P->Head.msgh_request_port);

    OutP->RetCode = semaphore_destroy(task,
            convert_port_to_semaphore(In0P->semaphore.name));
    task_deallocate(task);
#if __MigKernelSpecificCode
    if (OutP->RetCode != KERN_SUCCESS) {
        MIG_RETURN_ERROR(OutP, OutP->RetCode);
    }

    if (IP_VALID((ipc_port_t)In0P->semaphore.name))
        ipc_port_release_send((ipc_port_t)In0P->semaphore.name);
#endif /* __MigKernelSpecificCode */

The function convert_port_to_semaphore() takes a Mach port and produces a reference on the underlying semaphore object without consuming the reference on the port. If we assume that a correct implementation of the above code doesn't leak or consume extra references, then we can conclude the following intended semantics for semaphore_destroy():

  1. On success, semaphore_destroy() should consume the semaphore reference.
  2. On failure, semaphore_destroy() should still consume the semaphore reference.

Thus, semaphore_destroy() doesn't seem to follow the traditional rules of MIG semantics: a correct implementation always takes ownership of the semaphore object, regardless of whether the service routine returns success or failure.

This of course raises the question: what are the full rules governing MIG semantics? And are there any instances of code violating these other MIG rules?

A bad swap

Not long into my investigation into extended MIG semantics, I discovered the function task_swap_mach_voucher(). This is the MIG definition from osfmk/mach/task.defs:

routine task_swap_mach_voucher(
                task            : task_t;
                new_voucher     : ipc_voucher_t;
        inout   old_voucher     : ipc_voucher_t);

And here's the relevant code from _Xtask_swap_mach_voucher(), the autogenerated MIG wrapper:

mig_internal novalue _Xtask_swap_mach_voucher
       (mach_msg_header_t *InHeadP, mach_msg_header_t *OutHeadP)
{
...
   kern_return_t RetCode;
   task_t task;
   ipc_voucher_t new_voucher;
   ipc_voucher_t old_voucher;
...
   task = convert_port_to_task(In0P->Head.msgh_request_port);

   new_voucher = convert_port_to_voucher(In0P->new_voucher.name);

   old_voucher = convert_port_to_voucher(In0P->old_voucher.name);

   RetCode = task_swap_mach_voucher(task, new_voucher, &old_voucher);

   ipc_voucher_release(new_voucher);

   task_deallocate(task);

   if (RetCode != KERN_SUCCESS) {
       MIG_RETURN_ERROR(OutP, RetCode);
   }
...
   if (IP_VALID((ipc_port_t)In0P->old_voucher.name))
       ipc_port_release_send((ipc_port_t)In0P->old_voucher.name);

   if (IP_VALID((ipc_port_t)In0P->new_voucher.name))
       ipc_port_release_send((ipc_port_t)In0P->new_voucher.name);
...
   OutP->old_voucher.name = (mach_port_t)convert_voucher_to_port(old_voucher);

   OutP->Head.msgh_bits |= MACH_MSGH_BITS_COMPLEX;
   OutP->Head.msgh_size = (mach_msg_size_t)(sizeof(Reply));
   OutP->msgh_body.msgh_descriptor_count = 1;
}

Once again, assuming that a correct implementation doesn't leak or consume extra references, we can infer the following intended semantics for task_swap_mach_voucher():

  1. task_swap_mach_voucher() does not hold a reference on new_voucher; the new_voucher reference is borrowed and should not be consumed.
  2. task_swap_mach_voucher() holds a reference on the input value of old_voucher that it should consume.
  3. On failure, the output value of old_voucher should not hold any references on the pointed-to voucher object.
  4. On success, the output value of old_voucher holds a voucher reference donated from task_swap_mach_voucher() to _Xtask_swap_mach_voucher() that the latter consumes via convert_voucher_to_port().

With these semantics in mind, we can compare against the actual implementation. Here's the code from XNU 4903.221.2's osfmk/kern/task.c, presumably a placeholder implementation:

kern_return_t
task_swap_mach_voucher(
       task_t          task,
       ipc_voucher_t   new_voucher,
       ipc_voucher_t   *in_out_old_voucher)
{
   if (TASK_NULL == task)
       return KERN_INVALID_TASK;

   *in_out_old_voucher = new_voucher;
   return KERN_SUCCESS;
}

This implementation does not respect the intended semantics:

  1. The input value of in_out_old_voucher is a voucher reference owned by task_swap_mach_voucher(). By unconditionally overwriting it without first calling ipc_voucher_release(), task_swap_mach_voucher() leaks a voucher reference.
  2. The value new_voucher is not owned by task_swap_mach_voucher(), and yet it is being returned in the output value of in_out_old_voucher. This consumes a voucher reference that task_swap_mach_voucher() does not own.

Thus, task_swap_mach_voucher() actually contains two reference counting issues! We can leak a reference on a voucher by calling task_swap_mach_voucher() with the voucher as the third argument, and we can drop a reference on the voucher by passing the voucher as the second argument. This is a great exploitation primitive, since it offers us nearly complete control over the voucher object's reference count.

(Further investigation revealed that thread_swap_mach_voucher() contained a similar vulnerability, but only the reference leak part, and changes in iOS 12 made the vulnerability unexploitable.)

On vouchers

In order to grasp the impact of this vulnerability, it's helpful to understand a bit more about Mach vouchers, although the full details aren't important for exploitation.

Mach vouchers are represented by the type ipc_voucher_t in the kernel, with the following structure definition:

/*
* IPC Voucher
*
* Vouchers are a reference counted immutable (once-created) set of
* indexes to particular resource manager attribute values
* (which themselves are reference counted).
*/
struct ipc_voucher {
   iv_index_t      iv_hash;        /* checksum hash */
   iv_index_t      iv_sum;         /* checksum of values */
   os_refcnt_t     iv_refs;        /* reference count */
   iv_index_t      iv_table_size;  /* size of the voucher table */
   iv_index_t      iv_inline_table[IV_ENTRIES_INLINE];
   iv_entry_t      iv_table;       /* table of voucher attr entries */
   ipc_port_t      iv_port;        /* port representing the voucher */
   queue_chain_t   iv_hash_link;   /* link on hash chain */
};

As the comment indicates, an IPC voucher represents a set of arbitrary attributes that can be passed between processes via a send right in a Mach message. The primary client of Mach vouchers appears to be Apple's libdispatch library.

The only fields of ipc_voucher relevant to us are iv_refs and iv_port. The other fields are related to managing the global list of voucher objects and storing the attributes represented by a voucher, neither of which will be used in the exploit.

As of iOS 12, iv_refs is of type os_refcnt_t, which is a 32-bit reference count with allowed values in the range 1-0x0fffffff (that's 7 f's, not 8). Trying to retain or release a voucher with a reference count outside this range will trigger a panic.

iv_port is a pointer to the ipc_port object that represents this voucher to userspace. It gets initialized whenever convert_voucher_to_port() is called on an ipc_voucher with iv_port set to NULL.

In order to create a Mach voucher, you can call the host_create_mach_voucher() trap. This function takes a "recipe" describing the voucher's attributes and returns a voucher port representing the voucher. However, because vouchers are immutable, there is one quirk: if the resulting voucher's attributes are exactly the same as a voucher that already exists, then host_create_mach_voucher() will simply return a reference to the existing voucher rather than creating a new one.

That's out of line!

There are many different ways to exploit this bug, but in this post I'll discuss my favorite: incrementing an out-of-line Mach port pointer so that it points into pipe buffers.

Now that we understand what the vulnerability is, it's time to determine what we can do with it. As you'd expect, an ipc_voucher gets deallocated once its reference count drops to 0. Thus, we can use our vulnerability to cause the voucher to be unexpectedly freed.

But freeing the voucher is only useful if the freed voucher is subsequently reused in an interesting way. There are three components to this: storing a pointer to the freed voucher, reallocating the freed voucher with something useful, and reusing the stored voucher pointer to modify kernel state. If we can't get any one of these steps to work, then the whole bug is pretty much useless.

Let's consider the first step, storing a pointer to the voucher. There are a few places in the kernel that directly or indirectly store voucher pointers, including struct ipc_kmsg's ikm_voucher field and struct thread's ith_voucher field. Of these, the easiest to use is ith_voucher, since we can directly read and write this field's value from userspace by calling thread_get_mach_voucher() and thread_set_mach_voucher(). Thus, we can make ith_voucher point to a freed voucher by first calling thread_set_mach_voucher() to store a reference to the voucher, then using our voucher bug to remove the added reference, and finally deallocating the voucher port in userspace to free the voucher.

Next consider how to reallocate the voucher with something useful. ipc_voucher objects live in their own zalloc zone, ipc.vouchers, so we could easily get our freed voucher reallocated with another voucher object. Reallocating with any other type of object, however, would require us to force the kernel to perform zone garbage collection and move a page containing only freed vouchers over to another zone. Unfortunately, vouchers don't seem to store any significant privilege-relevant attributes, so reallocating our freed voucher with another voucher probably isn't helpful. That means we'll have to perform zone gc and reallocate the voucher with another type of object.

In order to figure out what type of object we should reallocate with, it's helpful to first examine how we will use the dangling voucher pointer in the thread's ith_voucher field. We have a few options, but the easiest is to call thread_get_mach_voucher() to create or return a voucher port for the freed voucher. This will invoke ipc_voucher_reference() and convert_voucher_to_port() on the freed ipc_voucher object, so we'll need to ensure that both iv_refs and iv_port are valid.

But what makes thread_get_mach_voucher() so useful for exploitation is that it returns the voucher's Mach port back to userspace. There are two ways we could leverage this. If the freed ipc_voucher object's iv_port field is non-NULL, then that pointer gets directly interpreted as an ipc_port pointer and thread_get_mach_voucher() returns it to us as a Mach send right. On the other hand, if iv_port is NULL, then convert_voucher_to_port() will return a freshly allocated voucher port that allows us to continue manipulating the freed voucher's reference count from userspace.

This brought me to the idea of reallocating the voucher using out-of-line ports. One way to send a large number of Mach port rights in a message is to list the ports in an out-of-line ports descriptor. When the kernel copies in an out-of-line ports descriptor, it allocates an array to store the list of ipc_port pointers. By sending many Mach messages containing out-of-line ports descriptors, we can reliably reallocate the freed ipc_voucher with an array of out-of-line Mach port pointers.

Since we can control which elements in the array are valid ports and which are MACH_PORT_NULL, we can ensure that we overwrite the voucher's iv_port field with NULL. That way, when we call thread_get_mach_voucher() in userspace, convert_voucher_to_port() will allocate a fresh voucher port that points to the overlapping voucher. Then we can use the reference counting bug again on the returned voucher port to modify the freed voucher's iv_refs field, which will change the value of the out-of-line port pointer that overlaps iv_refs by any amount we want.

Of course, we haven't yet addressed the question of ensuring that the iv_refs field is valid to begin with. As previously mentioned, iv_refs must be in the range 1-0x0fffffff if we want to reuse the freed ipc_voucher without triggering a kernel panic.

The ipc_voucher structure is 0x50 bytes and the iv_refs field is at offset 0x8; since the iPhone is little-endian, this means that if we reallocate the freed voucher with an array of out-of-line ports, iv_refs will always overlap with the lower 32 bits of an ipc_port pointer. Let's call the Mach port that overlaps iv_refs the base port. Using either MACH_PORT_NULL or MACH_PORT_DEAD as the base port would result in iv_refs being either 0 or 0xffffffff, both of which are invalid. Thus, the only remaining option is to use a real Mach port as the base port, so that iv_refs is overwritten with the lower 32 bits of a real ipc_port pointer.

This is dangerous because if the lower 32 bits of the base port's address are 0 or greater than 0x0fffffff, accessing the freed voucher will panic. Fortunately, kernel heap allocation on recent iOS devices is pretty well behaved: zalloc pages will be allocated from the range 0xffffffe0xxxxxxxx starting from low addresses, so as long as the heap hasn't become too unruly since the system booted (e.g. because of a heap groom or lots of activity), we can be reasonably sure that the lower 32 bits of the base port's address will lie within the required range. Hence overlapping iv_refs with an out-of-line Mach port pointer will almost certainly work fine if the exploit is run after a fresh boot.

This gives us our working strategy to exploit this bug:

  1. Allocate a page of Mach vouchers.
  2. Store a pointer to the target voucher in the thread's ith_voucher field and drop the added reference using the vulnerability.
  3. Deallocate the voucher ports, freeing all the vouchers.
  4. Force zone gc and reallocate the page of freed vouchers with an array of out-of-line ports. Overlap the target voucher's iv_refs field with the lower 32 bits of a pointer to the base port and overlap the voucher's iv_port field with NULL.
  5. Call thread_get_mach_voucher() to retrieve a voucher port for the voucher overlapping the out-of-line ports.
  6. Use the vulnerability again to modify the overlapping voucher's iv_refs field, which changes the out-of-line base port pointer so that it points somewhere else instead.
  7. Once we receive the Mach message containing the out-of-line ports, we get a send right to arbitrary memory interpreted as an ipc_port.

Pipe dreams

So what should we get a send right to? Ideally we'd be able to fully control the contents of the fake ipc_port we receive without having to play risky games by deallocating and then reallocating the memory backing the fake port.

Ian actually came up with a great technique for this in his multi_path and empty_list exploits using pipe buffers. Our exploit so far allows us to modify an out-of-line pointer to the base port so that it points somewhere else. So, if the original base port lies directly in front of a bunch of pipe buffers in kernel memory, then we can leak voucher references to increment the base port pointer in the out-of-line ports array so that it points into the pipe buffers instead.

At this point, we can receive the message containing the out-of-line ports back in userspace. This message will contain a send right to an ipc_port that overlaps one of our pipe buffers, so we can directly read and write the contents of the fake ipc_port's memory by reading and writing the overlapping pipe's file descriptors.

tfp0

Once we have a send right to a completely controllable ipc_port object, exploitation is basically deterministic.

We can build a basic kernel memory read primitive using the same old pid_for_task() trick: convert our port into a fake task port such that the fake task's bsd_info field (which is a pointer to a proc struct) points to the memory we want to read, and then call pid_for_task() to read the 4 bytes overlapping bsd_info->p_pid. Unfortunately, there's a small catch: we don't know the address of our pipe buffer in kernel memory, so we don't know where to make our fake task port's ip_kobject field point.

We can get around this by instead placing our fake task struct in a Mach message that we send to the fake port, after which we can read the pipe buffer overlapping the port and get the address of the message containing our fake task from the port's ip_messages.imq_messages field. Once we know the address of the ipc_kmsg containing our fake task, we can overwrite the contents of the fake port to turn it into a task port pointing to the fake task, and then call pid_for_task() on the fake task port as usual to read 4 bytes of arbitrary kernel memory.

An unfortunate consequence of this approach is that it leaks one ipc_kmsg struct for each 4-byte read. Thus, we'll want to build a better read primitive as quickly as possible and then free all the leaked messages.

In order to get the address of the pipe buffer we can leverage the fact that it resides at a known offset from the address of the base port. We can call mach_port_request_notification() on the fake port to add a request that the base port be notified once the fake port becomes a dead name. This causes the fake port's ip_requests field to point to a freshly allocated array containing a pointer to the base port, which means we can use our memory read primitive to read out the address of the base port and compute the address of the pipe buffer.

At this point we can build a fake kernel task inside the pipe buffer, giving us full kernel read/write. Next we allocate kernel memory with mach_vm_allocate(), write a new fake kernel task inside that memory, and then modify the fake port pointer in our process's ipc_entry table to point to the new kernel task instead. Finally, once we have our new kernel task port, we can clean up all the leaked memory.

And that's the complete exploit! You can find exploit code for the iPhone XS, iPhone XR, and iPhone 8 here: voucher_swap. A more in-depth, step-by-step technical analysis of the exploit technique is available in the source code.

Bug collision

I reported this vulnerability to Apple on December 6, 2018, and by December 19th Apple had already released iOS 12.1.3 beta build 16D5032a, which fixed the issue. Since this would be an incredibly quick turnaround for Apple, I suspected that this bug had been found and reported by another party first.

I subsequently learned that this bug was independently discovered and exploited by Qixun Zhao (@S0rryMybad) of Qihoo 360 Vulcan Team. Amusingly, we were both led to this bug through semaphore_destroy(); thus, I wouldn't be surprised to learn that this bug was broadly known before being fixed. SorryMybad used this vulnerability as part of a remote jailbreak for the Tianfu Cup; you can read about his strategy for obtaining tfp0.

Conclusion

This post looked at the discovery and exploitation of P0 issue 1731, an IPC voucher reference counting issue rooted in failing to follow MIG semantics for inout objects. When run a few seconds after a fresh boot, the exploit strategy discussed here is quite reliable: on the devices I've tested, the exploit succeeds upwards of 99% of the time. The exploit is also straightforward enough that, when successful, it allows us to clean up all leaked resources and leave the system in a completely stable state.

In a way, it's surprising that such "easy" vulnerabilities still exist: after all, XNU is open source and heavily scrutinized for valuable bugs like this. However, MIG semantics are very unintuitive and don't align well with the natural patterns for writing secure kernel code. While I'd love to believe that this is the last major MIG bug, I wouldn't be surprised to see at least a few more crop up.

This bug is also a good reminder that placeholder code can also introduce security vulnerabilities and should be scrutinized as tightly as functional code, no matter how simple it may seem.

And finally, it's worth noting that the biggest headache for me while exploiting this bug, the limited range of allowed reference count values, wasn't even an issue on iOS versions prior to 12. On earlier platforms, this bug would have always been incredibly reliable, not just directly after a clean boot. Thus, it's good to see that even though os_refcnt_t didn't stop this bug from being exploited, the mitigation at least impacts exploit reliability, and probably decreases the value of bugs like this to attackers.

My next post will show how to use this exploit to analyze Apple's implementation of Pointer Authentication, culminating in a technique that allows us to forge PACs for pointers signed with the A keys. This is sufficient to call arbitrary kernel functions or execute arbitrary code in the kernel via JOP.

Money Laundering and Counter-Terrorist Financing: What is FATF?

Many cybercrime investigators seem narrowly focused on the bits and bytes of the crimes they investigate while not truly understanding or interacting with those who focus on where the money goes.  As we've been expanding our horizons, I've learned quite a bit and wanted to share some resources for others who may have been similarly limited in their focus.

The Financial Action Task Force (FATF) was established in 1989. It built a list of Forty Recommendations for countries to address Money Laundering, which were first issued in 1990 and revised in 1996, 2001, 2003, and 2012.  Their latest FATF Annual Report (2017-2018) addresses terrorist financing as well as new methods and trends, and announces a research project on the financing of recruitment for terrorism.  Many of these Recommendations touch our daily lives in the form of regulations on financial institutions and interactions between international law enforcement agencies.
"Regardless of their size and complexity, the financial activities and channels of terrorists are an essential source of intelligence.  Financial investigation can identify terrorist cells, their associates and facilitators, and reveal the structure of terrorist groups, and their logistics and facilitation networks." -- FATF President Santiago Otamendi, 14DEC2017, NYC.
FATF also released an important report "Financing of Recruitment for Terrorist Purposes" in January 2018, and a second report "Concealment of Beneficial Ownership" in July 2018.
Beneficial Ownership (July 2018)
Terrorist Recruitment (January 2018)
FATF is composed of 38 member states, covering most of the major financial centers of the world. Each of these member states has pledged to come into compliance with the Forty Recommendations, and to measure its progress.

The FATF Forty Recommendations on Money Laundering and Counter Terrorism Finance

International Standards on Combating Money Laundering and the Financing of Terrorism & Proliferation (Oct 2018)
The Recommendations fall into seven major categories:

A - AML/CFT Policies and Coordination
  • R1. Assessing risks & applying a risk-based approach
  • R2. National cooperation and coordination


B - Money Laundering and Confiscation

  • R3. Money laundering offense 
  • R4. Confiscation and provisional measures


C - Terrorist Financing and Financing of Proliferation

  • R5. Terrorist financing offense
  • R6. Targeted financial sanctions related to terrorism and terrorist financing
  • R7. Targeted financial sanctions related to proliferation 
  • R8. Non-profit organizations


D - Preventative Measures

  • R9. Financial institution secrecy laws
  • R10. Customer due diligence 
  • R11. Record keeping 
  • R12. Politically exposed persons
  • R13. Correspondent banking
  • R14. Money or Value transfer services
  • R15. New technologies
  • R16. Wire transfers 
  • R17. Reliance on third parties 
  • R18. Internal controls and foreign branches and subsidiaries
  • R19. Higher-risk countries
  • R20. Reporting of suspicious transactions
  • R21. Tipping-off and confidentiality 
  • R22. Designated non-Financial Businesses and Professions: Customer due diligence
  • R23. Designated non-Financial Businesses and Professions: Other measures 


E - Transparency and Beneficial Ownership of Legal Persons and Arrangements

  • R24. Transparency and beneficial ownership of legal persons
  • R25. Transparency and beneficial ownership of legal arrangements 


F - Powers and Responsibilities of Competent Authorities and Other Institutional Measures

  • R26. Regulation and supervision of financial institutions
  • R27. Powers of supervisors
  • R28. Regulation and supervision of Designated non-Financial Businesses and Professions
  • R29. Financial intelligence units
  • R30. Responsibilities of law enforcement and investigative authorities 
  • R31. Powers of law enforcement and investigative authorities 
  • R32. Cash couriers 
  • R33. Statistics
  • R34. Guidance and feedback 
  • R35. Sanctions 


G - International Cooperation

  • R36. International instruments 
  • R37. Mutual legal assistance 
  • R38. Mutual legal assistance: freezing and confiscation
  • R39. Extradition 
  • R40. Other forms of international cooperation 


Mutual Evaluation and Ranking of Members

4th Round Ratings
In this chart, each member state, including the Associate Members, is ranked on how well it complies with each of the 11 "Immediate Outcomes" and 40 Recommendations.  For example, the United States is currently not compliant with Recommendations 22, 23, and 24 -- so we don't do well on non-financial institutions, and our shell-company games are impossible to monitor as of now, but we generally do well on most others.  Clicking the "4th Round Ratings" label will take you to the full chart.  If you do international business, doing business in countries with poor ratings across the board may itself be a form of risk.

FATF Member Assessments

Each member is encouraged to perform regular assessments to measure themselves on how they are complying with the Forty Recommendations.  Here are example reports from the United States, but these reports are available for every country that participates in FATF or one of the Associate Members.  In the United States, these assessments are published by the Department of the Treasury.  These reports were issued in 2015 by the Treasury Undersecretary for Terrorism and Financial Intelligence, Adam Szubin.

2015 Money Laundering Risk Assessment

2015 Terrorist Financing Risk Assessment

The goal of sharing these examples is to serve as a reminder that ALL such reports, for all member states, are available from the FATF site by looking for the "Mutual Evaluations Publications." As of this writing the four newest are from Tunisia, Nicaragua, Panama, and Tajikistan.

FATF Associate Members

FATF also has nine regional bodies, considered "FATF Associate Members," each of which puts out specialized information for its portion of the world.  For those interested in a particular region, following up on that region's reports from its representative task forces and groups will be worthwhile.

A Special Focus on Terrorist Financing Risks 

FATF issued their first special report offering guidance on Terrorist Financing in 2008:


Several more recent reports would be especially interesting regarding terrorist financing, stemming from an emergency meeting of 55 states, the United Nations, the Egmont Group of Financial Intelligence Units, the International Monetary Fund, the World Bank, and others specifically to address curbing the financing of ISIS/ISIL.



In the Paris meeting of 19OCT2018, FATF encouraged members to expand their focus from looking specifically at ISIL to more broadly include Al Qaeda and its Affiliates, issuing this guidance:



Regional Terrorist Financing Focuses

There have also been significant regional reports issued by sub-groups and associate members.

The Counter-Terrorism Financing Summit, hosted by Australia's financial intelligence agency (AUSTRAC) and its Indonesian counterpart, Pusat Pelaporan dan Analisis Transaksi Keuangan (PPATK), issued the Regional Risk Assessment on Terrorism Financing 2016.  The following year, the event was repeated, adding Bank Negara Malaysia as a partner.  These events issued two small statements and one more substantial report, addressing events in the Philippines, Thailand, Malaysia, Singapore, Indonesia, and Australia, and how those events were funded.

A risk methodology for their region (p.22)

The Nusa Dua Statement - August 2016 
Kuala Lumpur Communique - November 2017 


West and Central Africa have very different concerns, and held a summit to discuss these differences, resulting in this excellent joint publication: 

"Terrorist Financing in West and Central Africa", October 2016
50 page joint report from FATF, GIABA, and GABAC


Particular Funding Methods for Terrorism Finance

Many other special reports have been issued, related to the trade in:

Virtual Currencies of Growing Concern

In the Paris meeting 19OCT2018, a special issue that was raised was the Regulation of Virtual Currencies.  This was deemed to be a matter of strategic interest that will be further evaluated, especially with regard to Initial Coin Offerings and their role in Money Laundering.  FATF has committed to work with the G20 to come up with new guidelines to update their previous report "Virtual Currencies: Key Definitions and Potential AML/CFT Risks" as well as their report "Guidance for a Risk-based Approach to Virtual Currencies" (June 2015 - 46 page PDF).  

The work so far is in the form of a report to the G20, which addresses many topics in addition to Virtual Currencies:


In part the report shares:

"Noting that virtual currencies/crypto-assets raise issues with respect to money laundering and terrorist financing, they committed to implement the FATF Standards as they apply to virtual currencies/crypto-assets.  They looked forward to the FATF review of those Standards, called on the FATF to advance global implementation, and asked the FATF to provide an update on this work in July 2018.  The FATF will take this work forward under the US presidency from 1 July 2018 to 30 June 2019."

This work begins with first reviewing laws and regulations regarding crypto-assets and virtual currencies in each of the G20 states.

More on this topic will certainly be forthcoming from FATF.




Magento Patches Command Execution, Local File Read Flaws

Magento recently addressed two vulnerabilities that could lead to command execution and local file read, a SCRT security researcher reveals. 

Written in PHP, Magento is a popular open-source e-commerce platform that is part of Adobe Experience Cloud. Vulnerabilities in Magento – and any other popular content management systems out there – are valuable to malicious actors, as they could be exploited to impact a large number of users.

In September last year, SCRT’s Daniel Le Gall found two vulnerabilities in Magento, both of which could be exploited with low-privilege admin accounts, which are usually provided to marketing users. 

The first of the two security bugs is command execution via path traversal. Exploitation, the researcher reveals, requires the user to be able to create products. The second issue is a local file read that requires the user to be able to create email templates. 

The root cause of the first issue is a path traversal, which Le Gall discovered in a function that checks whether a file that templates can be loaded from is located in certain directories. The faulty function only checks whether the provided path begins with a specific directory name, not whether the resolved path is inside the whitelisted directories. 

Because of the partial check performed by the function, a path traversal can be triggered through a Product Design, but only .phtml files are processed as PHP code. 
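The flawed pattern is easy to demonstrate outside Magento. The Python sketch below (illustrative only, not Magento's actual PHP; `ALLOWED_DIR` is a hypothetical whitelisted template directory) contrasts a naive prefix check with one that normalizes the path before testing containment:

```python
import posixpath

ALLOWED_DIR = "/var/www/design"  # hypothetical whitelisted template directory

def naive_check(path):
    # Flawed: tests only that the literal string begins with the
    # whitelisted directory name, so "../" sequences slip through.
    return path.startswith(ALLOWED_DIR)

def safe_check(path):
    # Normalize first (collapsing ".." components), then test containment.
    resolved = posixpath.normpath(path)
    return resolved == ALLOWED_DIR or resolved.startswith(ALLOWED_DIR + "/")

evil = "/var/www/design/../../../tmp/shell.phtml"
good = "/var/www/design/theme/header.phtml"
print(naive_check(evil), safe_check(evil))  # True False
print(naive_check(good), safe_check(good))  # True True
```

Note that `posixpath.normpath` is purely lexical; production code should also resolve symlinks (for example with `os.path.realpath`) before the containment test.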

Although .phtml is a forbidden extension on most upload forms, an attacker could create a product with “Custom Options” and allow any extension they want to be uploaded, including .phtml. Once the item is ordered, the uploaded file is stored with that extension, which allows for command execution, the researcher says. 
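The defensive counterpart is a deny-by-default extension allow-list enforced server-side, regardless of what a product's options request. A minimal sketch (the extension set is an arbitrary example, not Magento's configuration):

```python
import posixpath

# Arbitrary example allow-list: only inert file types may be uploaded.
SAFE_EXTENSIONS = {".png", ".jpg", ".pdf", ".txt"}

def upload_allowed(filename):
    # Deny by default: this avoids the trap of blocking ".php"
    # while forgetting executable variants such as ".phtml".
    _, ext = posixpath.splitext(filename.lower())
    return ext in SAFE_EXTENSIONS

print(upload_allowed("receipt.pdf"))  # True
print(upload_allowed("shell.phtml"))  # False
```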

The second vulnerability was found in email templating, which allows the use of a special directive to load the content of a CSS file into the email. The two functions managing this directive do not check for path traversal characters anywhere, so an attacker could inject any file into the email template.

“Creating an email template with the {{css file="../../../../../../../../../../../../../../../etc/passwd"}} should be sufficient to trigger the vulnerability,” Le Gall says. 

The researcher disclosed both vulnerabilities in September last year, and patches released at the end of November (Magento 2.2.7 and 2.1.16) addressed both of them. The researcher was awarded a total of $7,500 in bug bounty rewards for the findings. 

Related: Hacked Magento Sites Steal Card Data, Spread Malware

Related: Magento Patches Critical Vulnerability in eCommerce Platforms

Copyright 2010 Respective Author at Infosec Island

Apple’s Group FaceTime: A place for spies?

Apple has disabled Group FaceTime following discovery of a flaw that could potentially let people hear audio from other people’s devices without permission. What’s going on and what can you do about it?

The Group FaceTime bug, in brief

A 9to5Mac report, based on a video published to Twitter by @BmManski, revealed that this flaw lets a user listen to audio captured by another person’s device before they accept or reject the FaceTime call request. The problem affects only iOS devices running iOS 12.1 or later (pending an update).


PrivateVPN by Trunkspace review: A basic VPN with a solid no-logging commitment

PrivateVPN by Trunkspace in brief:

  • P2P allowed: Yes
  • Business location: Canada
  • Number of servers: 100+
  • Number of country locations: 48
  • Cost: $50 per year
  • VPN protocol: OpenVPN
  • Data encryption: AES-128-GCM (default)
  • Data authentication: SHA1 for HMAC authentication
  • Handshake: TLS-ECDHE-RSA-AES256-GCM-SHA384, 2048-bit RSA

PrivateVPN is a name we’ve heard before, but this new service from Trunkspace Hosting is not to be confused with Sweden-based PrivateVPN.com. Trunkspace’s PrivateVPN is a very basic affair that allows you to get online with servers in 48 countries. 


APT39: An Iranian Cyber Espionage Group Focused on Personal Information

UPDATE (Jan. 30): Figure 1 has been updated to more accurately reflect APT39 targeting. Specifically, Australia, Norway and South Korea have been removed.

In December 2018, FireEye identified APT39 as an Iranian cyber espionage group responsible for widespread theft of personal information. We have tracked activity linked to this group since November 2014 in order to protect organizations from APT39 activity to date. APT39’s focus on the widespread theft of personal information sets it apart from other Iranian groups FireEye tracks, which have been linked to influence operations, disruptive attacks, and other threats. APT39 likely focuses on personal information to support monitoring, tracking, or surveillance operations that serve Iran’s national priorities, or potentially to create additional accesses and vectors to facilitate future campaigns. 

APT39 was created to bring together previous activities and methods used by this actor, and its activities largely align with a group publicly referred to as "Chafer." However, there are differences in what has been publicly reported due to the variances in how organizations track activity. APT39 primarily leverages the SEAWEED and CACHEMONEY backdoors along with a specific variant of the POWBAT backdoor. While APT39's targeting scope is global, its activities are concentrated in the Middle East. APT39 has prioritized the telecommunications sector, with additional targeting of the travel industry and IT firms that support it and the high-tech industry. The countries and industries targeted by APT39 are depicted in Figure 1.


Figure 1: Countries and industries targeted by APT39

Operational Intent

APT39's focus on the telecommunications and travel industries suggests intent to perform monitoring, tracking, or surveillance operations against specific individuals, collect proprietary or customer data for commercial or operational purposes that serve strategic requirements related to national priorities, or create additional accesses and vectors to facilitate future campaigns. Targeting of government entities suggests a potential secondary intent to collect geopolitical data that may benefit nation-state decision making. Targeting data supports the belief that APT39's key mission is to track or monitor targets of interest, collect personal information, including travel itineraries, and gather customer data from telecommunications firms.

Iran Nexus Indicators

We have moderate confidence APT39 operations are conducted in support of Iranian national interests based on regional targeting patterns focused in the Middle East, infrastructure, timing, and similarities to APT34, a group that loosely aligns with activity publicly reported as “OilRig”. While APT39 and APT34 share some similarities, including malware distribution methods, POWBAT backdoor use, infrastructure nomenclature, and targeting overlaps, we consider APT39 to be distinct from APT34 given its use of a different POWBAT variant. It is possible that these groups work together or share resources at some level.

Attack Lifecycle

APT39 uses a variety of custom and publicly available malware and tools at all stages of the attack lifecycle.

Initial Compromise

For initial compromise, FireEye Intelligence has observed APT39 leverage spear phishing emails with malicious attachments and/or hyperlinks typically resulting in a POWBAT infection. APT39 frequently registers and leverages domains that masquerade as legitimate web services and organizations that are relevant to the intended target. Furthermore, this group has routinely identified and exploited vulnerable web servers of targeted organizations to install web shells, such as ANTAK and ASPXSPY, and used stolen legitimate credentials to compromise externally facing Outlook Web Access (OWA) resources.

Establish Foothold, Escalate Privileges, and Internal Reconnaissance

Post-compromise, APT39 leverages custom backdoors such as SEAWEED, CACHEMONEY, and a unique variant of POWBAT to establish a foothold in a target environment. During privilege escalation, freely available tools such as Mimikatz and Ncrack have been observed, in addition to legitimate tools such as Windows Credential Editor and ProcDump. Internal reconnaissance has been performed using custom scripts and both freely available and custom tools such as the port scanner, BLUETORCH.

Lateral Movement, Maintain Presence, and Complete Mission

APT39 facilitates lateral movement through myriad tools such as Remote Desktop Protocol (RDP), Secure Shell (SSH), PsExec, RemCom, and xCmdSvc. Custom tools such as REDTRIP, PINKTRIP, and BLUETRIP have also been used to create SOCKS5 proxies between infected hosts. In addition to using RDP for lateral movement, APT39 has used this protocol to maintain persistence in a victim environment. To complete its mission, APT39 typically archives stolen data with compression tools such as WinRAR or 7-Zip.


Figure 2: APT39 attack lifecycle

There are indications that APT39 has a penchant for operational security designed to bypass detection efforts by network defenders: in one case it used a modified version of Mimikatz repacked to thwart anti-virus detection, and in another, after gaining initial access, it performed credential harvesting outside the compromised entity's environment to avoid detection.

Outlook

We believe APT39's significant targeting of the telecommunications and travel industries reflects efforts to collect personal information on targets of interest and customer data for the purposes of surveillance to facilitate future operations. Telecommunications firms are attractive targets given that they store large amounts of personal and customer information, provide access to critical infrastructure used for communications, and enable access to a wide range of potential targets across multiple verticals. APT39's targeting not only represents a threat to known targeted industries, but it extends to these organizations' clientele, which includes a wide variety of sectors and individuals on a global scale. APT39's activity showcases Iran's potential global operational reach and how it uses cyber operations as a low-cost and effective tool to facilitate the collection of key data on perceived national security threats and gain advantages against regional and global rivals.

Security Alert: Danish E-Shoppers Targeted by Another Wave of Nets.eu Phishing Campaign

In the world of online security, two things are clear: phishing remains a top threat, especially against online shoppers, and the cleverest attacks still target payment processors and financial companies. 

This week we observed a new Nets.eu phishing campaign, designed to piggyback off the popularity of this major company that provides the acquiring agreements for merchants to accept online payments. 

Instead of sending off phishing emails with links that appear to come from online stores or banks, malicious actors now move deeper into the payment-processing chain in the hope of tricking users into willingly giving up their login credentials. 

Nets, one of the biggest payment processors in Europe, has constantly seen its name hijacked and used in phishing scams. Just how big is the scope of the issue? 

So far, out of the tremendous number of compromised domains blocked by Thor Foresight, our researchers have observed 1,535 domains containing variations on the name “Nets,” many of them with .dk or .de extensions to lend “legitimacy” to the URLs. 
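Hunting for such lookalikes is, at its core, simple string matching over a domain feed. A toy sketch (the list below is illustrative, apart from netsbeskytte.life, which appears in this campaign):

```python
# Illustrative feed of blocked domains (the real telemetry is Thor Foresight's).
blocked = [
    "netsbeskytte.life",
    "nets-refund.dk",
    "example.com",
    "mynets-login.de",
]

# Flag domains containing the brand name, then narrow to the TLDs
# the attackers favour for added "legitimacy".
lookalikes = [d for d in blocked if "nets" in d.lower()]
by_tld = [d for d in lookalikes if d.endswith((".dk", ".de"))]

print(len(lookalikes))  # 3
print(by_tld)           # ['nets-refund.dk', 'mynets-login.de']
```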

The way this phishing attack is structured, it can fool even educated internet users.  

First off is the original malicious email, which alerts the receiver that Nets recorded a suspicious payment made outside of Denmark. It also prompts the receiver to take action to cancel a transaction and get a refund. 

To add even more legitimacy to the scam, the email even includes a CVR number, the unique identifier for any business registered in Denmark’s Central Business Register. However, a quick eye might notice bits of broken HTML code preceding that CVR number. 

Once the link is clicked, the user is taken to “netsbeskytte.life/index.html” (a website quickly taken down once the email was flagged as spam) and asked to input their credentials. The page is served identically over HTTP and HTTPS, which can lead some browsers to disregard its malicious nature. 

Because it looks like a private portal hosted by a financial company, users don’t expect the URL to look particularly user-friendly, so they would go along with inputting their personal information in the fields.  

On Chrome and Firefox, the browser makes it clear that the user should proceed no further.  

On Internet Explorer, however, there is absolutely no alarm drawn over the lack of a security certificate or the potentially dangerous URL.  

This is doubly problematic since a lot of Outlook users leave Internet Explorer as a primary browser. 

As phishing continues to grow at an exponential rate, we urge online shoppers (and everyone else!) to exercise double caution in clicking any link received via email. If that link redirects to a page that demands your login, open a separate browser, Google search the service in question and perform the operation from the legitimate website.  

As an extra rule of thumb, be extra suspicious of any email that comes from a bank, a payment processor or an online store, especially if it tries to warn you of fraudulent payment.  

Because attacks like this one come and go with incredible speed, with malicious websites being taken down and re-uploaded at a different address as soon as a security researcher discovers them, it’s important that users know how to prevent phishing. 

We put together these 4 resources to learn to protect yourself from phishing and other online attacks designed to obtain your sensitive information: 

*This article features cyber intelligence provided by CSIS Security Group researchers.

The post Security Alert: Danish E-Shoppers Targeted by Another Wave of Nets.eu Phishing Campaign appeared first on Heimdal Security Blog.

Passwords in a file

My dad is on some sort of committee for his local home owners association. He asked about saving all the passwords in a file stored on Microsoft's cloud OneDrive, along with policy/procedures for the association. I assumed he called because I'm an internationally recognized cyberexpert. Or maybe he just wanted to chat with me*. Anyway, I thought I'd write up a response.

The most important rule of cybersecurity is that it depends upon the risks/costs. That means if what you want to do is write down the procedures for operating a garden pump, including the passwords, then that's fine. This is because there's not much danger of hackers exploiting this. On the other hand, if the question is passwords for the association's bank account, then DON'T DO THIS. Such passwords should never be online. Instead, write them down and store the pieces of paper in a secure place.

OneDrive is secure, as much as anything is. The problem is that people aren't secure. There's probably one member of the home owner's association who is constantly infecting themselves with viruses or falling victim to scams. This is the person who you are giving OneDrive access to. This is fine for the meaningless passwords, but very much not fine for bank accounts.

OneDrive also has some useful backup features. Thus, when one of your members infects themselves with ransomware, which will encrypt all the OneDrive's contents, you can retrieve the old versions of the documents. I highly recommend groups like the home owner's association use OneDrive. I use it as part of my Office 365 subscription for $99/year.

Just don't do this for banking passwords. In fact, not only should you not store such a password online, you should strongly consider getting "two factor authentication" setup for the account. This is a system where you need an additional hardware device/token in addition to a password (in some cases, your phone can be used as the additional device). This may not work if multiple people need to access a common account, but then, you should have multiple passwords, for each individual, in such cases. Your bank should have descriptions of how to set this up. If your bank doesn't offer two factor authentication for its websites, then you really need to switch banks.
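Phone-based second factors commonly implement TOTP (RFC 6238): the phone and the server share a secret and each derive a short-lived code from the current time. A minimal sketch of the algorithm (not any particular bank's implementation), checked against an RFC 6238 test vector:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(secret, t // step, digits)

# RFC 6238 Appendix B test vector: ASCII secret, T=59 -> 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because each code is derived from the clock, a stolen code expires within about 30 seconds, which is what makes the second factor worth the hassle.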

For individuals, write your passwords down on paper. For elderly parents, write down a copy and give it to your kids. It should go without saying: store that paper in a safe place, ideally a safe, not a post-it note glued to your monitor. Again, this is for your important passwords, like for bank accounts and e-mail. For your Spotify or Pandora accounts (music services), then security really doesn't matter.

Lastly, the way hackers most often break into things like bank accounts is because people use the same password everywhere. When one site gets hacked, those passwords are then used to hack accounts on other websites. Thus, for important accounts, don't reuse passwords, make them unique for just that account. Since you can't remember unique passwords for every account, write them down.

You can check whether your accounts have been exposed this way at http://haveibeenpwned.com by entering your email address. Entering my dad's email address, I find that his accounts at Adobe, LinkedIn, and Disqus have been discovered by hackers (due to hacks of those websites) and published. I sure hope that whatever those passwords were, they are not the same as or similar to his passwords for Gmail or his bank account.
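For passwords (as opposed to email addresses), Have I Been Pwned also offers a Pwned Passwords "range" API built on k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine. A sketch against that public endpoint:

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password):
    # Hash locally; only the 5-character prefix is ever sent (k-anonymity).
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_pwned(password):
    # Queries the public Pwned Passwords range API and scans the
    # returned suffixes for a match.
    prefix, suffix = sha1_prefix_suffix(password)
    url = "https://api.pwnedpasswords.com/range/" + prefix
    body = urllib.request.urlopen(url).read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# SHA-1("password") begins 5BAA6...; that prefix is all the service sees.
print(sha1_prefix_suffix("password")[0])  # 5BAA6
```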




* the lame joke at the top was my dad's, so don't blame me :-)

Risky Business #528 — Huawei dinged, epic FaceTime and Exchange bugs

Adam Boileau co-hosts this week’s Risky Business episode. We talk about:

  • The Huawei indictments
  • The epic Facetime logic bug
  • The even more epic Exchange privesc bug
  • CISA’s “fix yo DNS” directive
  • Black Cube busted doing shady stuff to Citizen Lab
  • Yahoo shareholder lawsuit settlement makes directors twitchy
  • Internet filtering kicks off in Venezuela
  • Much, much MORE!

This week’s show is brought to you by Thinkst Canary – they make hardware honeypots and the tools you need to deploy canarytokens at scale. They also make virtual honeypots! This week Thinkst’s founder Haroon Meer will be along to wave his finger at basically all of us over what he sees as the security discipline’s tendency to not really learn anything from security conferences. It’s “contertainment,” he says, followed by “GET OFF MY LAWN”.

Links to everything that we discussed are below and you can follow Patrick or Adam on Twitter if that’s your thing.

Show notes

US hammers Huawei with 23 indictments for stolen trade secrets, fraud - CNET
Major iPhone FaceTime bug lets you hear the audio of the person you are calling ... before they pick up - 9to5Mac
Abusing Exchange: One API call away from Domain Admin - dirkjanm.io
DHS: Multiple US gov domains hit in serious DNS hijacking wave | Ars Technica
cyber.dhs.gov - Emergency Directive 19-01
Rep. Langevin: We need a DHS briefing to understand extent of DNS hijacking threat
ALERT: DNS hijacking activity - NCSC Site
APNewsBreak: Undercover agents target cybersecurity watchdog
Japanese government plans to hack into citizens' IoT devices | ZDNet
Internet experiment goes wrong, takes down a bunch of Linux routers | ZDNet
Lessons for Corporate Boardrooms From Yahoo’s Cybersecurity Settlement - The New York Times
Mystery still surrounds hack of PHP PEAR website | ZDNet
WordPress sites under attack via zero-day in abandoned plugin | ZDNet
OONI report into Internet filtering in Venezuela
Tonga sent back to 'dark ages' after underwater Internet cable severed | Fox News
Opinion | Mueller’s Real Target in the Roger Stone Indictment - The New York Times
Exclusive: Ukraine says it sees surge in cyber attacks targeting election | Reuters
This Time It’s Russia’s Emails Getting Leaked
Russia Targeting British Institute In Disinformation Campaign
Unsecured MongoDB databases expose Kremlin's backdoor into Russian businesses | ZDNet
Facebook to encrypt Instagram messages ahead of integration with WhatsApp, Facebook Messenger | TechCrunch
Cryptopia funds still being drained by hackers while police investigated | RNZ News
Europol arrests UK man for stealing €10 million worth of IOTA cryptocurrency | ZDNet
Police license plate readers are still exposed on the internet | TechCrunch
Malvertising campaign targets Apple users with malicious code hidden in images | ZDNet
Hackers are going after Cisco RV320/RV325 routers using a new exploit | ZDNet
Spencer Dailey on Twitter: "hard to understate how bad this flaw is--shocked more pubs haven't picked up on this. The affected chip is ubiquitous, the potential exploits allow anyone within wifi-range to run arbitrary code on the machine. Wifi routers themselves use affected chip 🤯 https://t.co/XQx4SobJtj"
GitHub - hannob/apache-uaf: Apache use after free bug infos / ASAN stack traces
Lesley Carhart on Twitter: "At the very least I’ll be able to publish these questions so that other people can grill their properties should they forcibly migrate to IoT equipment."
APT39: An Iranian Cyber Espionage Group Focused on Personal Information « APT39: An Iranian Cyber Espionage Group Focused on Personal Information | FireEye Inc
44CON 2013 - A talk about (info-sec) talks - Haroon Meer - YouTube

43% of Cybercrimes Target Small Businesses – Are You Next?

Cybercrimes cost UK small companies an average of £894 in the year ending February 2018. Small businesses are an easy target for cybercrooks, so it is little surprise that around 43% of cybercrime is committed against small businesses. According to research conducted by EveryCloud, there is much more at stake than a £900 annual loss, with six out of ten small businesses closing within six months of a data breach.

Damage to a small company’s reputation can be difficult to repair and recover from following a data breach. Since the GDPR data privacy law came into force in May 2018, companies face significant financial sanctions from regulators if found negligent in safeguarding personal information. Add in the potential for civil suits and the costs start mounting up fast, which could even turn into a business killer. A case in point is political consulting and data mining firm Cambridge Analytica, which went under in May 2018 after being implicated in data privacy issues related to its use of personal data held on Facebook. However, most small businesses taken out by cyber attacks don't have the public profile to make the deadly headlines.

Most big companies have contingency plans and resources to take the hit from a major cyber attack; although such attacks prove highly costly to big business, the vast majority are able to recover and continue trading. Working on a tight budget, small businesses just don't have the deep pockets of big business. Cyber resilience is not a high priority within most small businesses' strategies; as you might imagine, business plans are typically very focused on business growth.

Cyber resilience within small business need not be difficult, but it does involve going beyond installing antivirus. A great starting point is UK National Cyber Security Centre's Cyber Essentials Scheme, a simple but effective approach to help businesses protect themselves from the most common cyber attacks. You’ll also need to pay attention to staff security awareness training in the workplace.

Every employee must ensure that the company is protected from attacks as much as possible. It’s your responsibility to make sure that everyone understands this and knows what preventative measures to put in place.

It may cost a few bob, but getting an expert in to check for holes in your cybersecurity is a good place to start. They can check for potential risk areas and also educate you and your staff about security awareness.

We all know the basics, but how many times do we let convenience trump good common sense? For example, how many times have you used the same password when registering for different sites?

How strong is the password that you chose? If it’s easy for you to remember, then there’s a good chance that it’s not as secure as you’d like. If you’d like more tips on keeping your information secure, then check out the infographic below.
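One way out of the memorability trade-off is to generate passwords rather than invent them. A sketch using Python's standard `secrets` module (length and character set are arbitrary choices for this example):

```python
import secrets
import string

# Cryptographically secure randomness; length and character set are
# arbitrary choices for this example.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
password = "".join(secrets.choice(alphabet) for _ in range(16))

print(len(password))  # 16
```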


Trying DetectionLab

Many security professionals run personal labs. Trying to create an environment that includes fairly modern Windows systems can be a challenge. In the age of "infrastructure as code," there should be a simpler way to deploy systems in a repeatable, virtualized way -- right?

Enter DetectionLab, a project by Chris Long. Briefly, Chris built a project that uses Packer and Vagrant to create an instrumented lab environment. Chris explained the project in late 2017 in a Medium post, which I recommend reading.

I can't even begin to describe all the functionality packed into this project. So much of it is new, but this is a great way to learn about it. In this post, I would like to show how I got a version of DetectionLab running.

My build environment included a modern laptop with 16 GB RAM and Windows 10 professional. I had already installed Virtualbox 6.0 with the appropriate VirtualBox Extension Pack. I had also enabled the native OpenSSH server and performed all DetectionLab installation functions over an OpenSSH session.

Install Chocolatey

My first step was to install Chocolatey, a package manager for Windows. I wanted to use this to install the Git client I wanted to use to clone the DetectionLab repo. Commands I typed at each stage are in bold below.

root@LAPTOP-HT4TGVCP C:\Users\root>@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
Getting latest version of the Chocolatey package for download.
Getting Chocolatey from https://chocolatey.org/api/v2/package/chocolatey/0.10.11.
Downloading 7-Zip commandline tool prior to extraction.
Extracting C:\Users\root\AppData\Local\Temp\chocolatey\chocInstall\chocolatey.zip to C:\Users\root\AppData\Local\Temp\chocolatey\chocInstall...
Installing chocolatey on this machine
Creating ChocolateyInstall as an environment variable (targeting 'Machine')
  Setting ChocolateyInstall to 'C:\ProgramData\chocolatey'
WARNING: It's very likely you will need to close and reopen your shell
  before you can use choco.
Restricting write permissions to Administrators
We are setting up the Chocolatey package repository.
The packages themselves go to 'C:\ProgramData\chocolatey\lib'
  (i.e. C:\ProgramData\chocolatey\lib\yourPackageName).
A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin'
  and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'.

Creating Chocolatey folders if they do not already exist.

WARNING: You can safely ignore errors related to missing log files when
  upgrading from a version of Chocolatey less than 0.9.9.
  'Batch file could not be found' is also safe to ignore.
  'The system cannot find the file specified' - also safe.
chocolatey.nupkg file not installed in lib.
 Attempting to locate it from bootstrapper.
PATH environment variable does not have C:\ProgramData\chocolatey\bin in it. Adding...
WARNING: Not setting tab completion: Profile file does not exist at 'C:\Users\root\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'.
Chocolatey (choco.exe) is now ready.
You can call choco from anywhere, command line or powershell by typing choco.
Run choco /? for a list of functions.
You may need to shut down and restart powershell and/or consoles
 first prior to using choco.
Ensuring chocolatey commands are on the path
Ensuring chocolatey.nupkg is in the lib folder

root@LAPTOP-HT4TGVCP C:\Users\root>choco
Chocolatey v0.10.11
Please run 'choco -?' or 'choco <command> -?' for help menu.

Install Git

With Chocolatey installed, I could install Git.

root@LAPTOP-HT4TGVCP C:\Users\root>choco install git -params '"/GitAndUnixToolsOnPath"'
Chocolatey v0.10.11
Installing the following packages:
git
By installing you accept licenses for the packages.
Progress: Downloading git.install 2.20.1... 100%
Progress: Downloading chocolatey-core.extension 1.3.3... 100%
Progress: Downloading git 2.20.1... 100%

chocolatey-core.extension v1.3.3 [Approved]
chocolatey-core.extension package files install completed. Performing other installation steps.
 Installed/updated chocolatey-core extensions.
 The install of chocolatey-core.extension was successful.
  Software installed to 'C:\ProgramData\chocolatey\extensions\chocolatey-core'

git.install v2.20.1 [Approved]
git.install package files install completed. Performing other installation steps.
The package git.install wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[N]o/[P]rint): y

@{Inno Setup CodeFile: Path Option=CmdTools; PSPath=Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Git_
is1; PSParentPath=Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall; PSChildName=Git_is1; PSDrive=HKLM; PS
Provider=Microsoft.PowerShell.Core\Registry}
Using Git LFS
Installing 64-bit git.install...
git.install has been installed.
git.install installed to 'C:\Program Files\Git'
  git.install can be automatically uninstalled.
Environment Vars (like PATH) have changed. Close/reopen your shell to
 see the changes (or in powershell/cmd.exe just type `refreshenv`).
 The install of git.install was successful.
  Software installed to 'C:\Program Files\Git\'

git v2.20.1 [Approved]
git package files install completed. Performing other installation steps.
 The install of git was successful.
  Software install location not explicitly set, could be in package or
  default install location if installer.

Chocolatey installed 3/3 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Clone DetectionLab

With Git installed, I can clone the DetectionLab repo from Github.

root@LAPTOP-HT4TGVCP C:\Users\root>mkdir git

root@LAPTOP-HT4TGVCP C:\Users\root>cd git

root@LAPTOP-HT4TGVCP C:\Users\root\git>mkdir detectionlab

root@LAPTOP-HT4TGVCP C:\Users\root\git>cd detectionlab

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>git clone https://github.com/clong/DetectionLab.git
'git' is not recognized as an internal or external command,
operable program or batch file.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>refreshenv
Refreshing environment variables from registry for cmd.exe. Please wait...Finished..

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>git clone https://github.com/clong/DetectionLab.git
Cloning into 'DetectionLab'...
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1163 (delta 0), reused 0 (delta 0), pack-reused 1162
Receiving objects: 100% (1163/1163), 11.81 MiB | 12.24 MiB/s, done.
Resolving deltas: 100% (568/568), done.

Install Vagrant

Before going any further, I needed to install Vagrant.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>cd ..\..\

root@LAPTOP-HT4TGVCP C:\Users\root>choco install vagrant
Chocolatey v0.10.11
Installing the following packages:
vagrant
By installing you accept licenses for the packages.
Progress: Downloading vagrant 2.2.3... 100%

vagrant v2.2.3 [Approved]
vagrant package files install completed. Performing other installation steps.
The package vagrant wants to run 'chocolateyinstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[N]o/[P]rint): y

Downloading vagrant 64 bit
  from 'https://releases.hashicorp.com/vagrant/2.2.3/vagrant_2.2.3_x86_64.msi'
Progress: 100% - Completed download of C:\Users\root\AppData\Local\Temp\chocolatey\vagrant\2.2.3\vagrant_2.2.3_x86_64.msi (229.22 MB).
Download of vagrant_2.2.3_x86_64.msi (229.22 MB) completed.
Hashes match.
Installing vagrant...
vagrant has been installed.
Repairing currently installed global plugins. This may take a few minutes...
Installed plugins successfully repaired!
  vagrant may be able to be automatically uninstalled.
Environment Vars (like PATH) have changed. Close/reopen your shell to
 see the changes (or in powershell/cmd.exe just type `refreshenv`).
 The install of vagrant was successful.
  Software installed as 'msi', install location is likely default.

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Packages requiring reboot:
 - vagrant (exit code 3010)

The recent package changes indicate a reboot is necessary.
 Please reboot at your earliest convenience.

root@LAPTOP-HT4TGVCP C:\Users\root>shutdown /r /t 0

Installing DetectionLab

Now we are finally at the point where we can install DetectionLab. Note that in my example I downloaded boxes already built by Chris rather than building my own, in order to save time. You can follow his instructions to build the boxes yourself.

I saw an error regarding the win10 host, but that did not appear to be a real problem.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab>powershell
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\root\git\detectionlab\DetectionLab> .\build.ps1 -ProviderName virtualbox -VagrantOnly
[preflight_checks] Running..
[preflight_checks] Checking if Vagrant is installed
[preflight_checks] Checking for pre-existing boxes..
[preflight_checks] Checking for vagrant instances..
[preflight_checks] Checking disk space..
[preflight_checks] Checking if vagrant-reload is installed..
The vagrant-reload plugin is required and not currently installed. This script will attempt to install it now.
Installing the 'vagrant-reload' plugin. This can take a few minutes...
Installed the plugin 'vagrant-reload (0.0.1)'!
[preflight_checks] Finished.
[download_boxes] Running..
[download_boxes] Downloading windows_10_virtualbox.box

[download_boxes] Downloading windows_2016_virtualbox.box
[download_boxes] Getting filehash for: windows_10_virtualbox.box
[download_boxes] Getting filehash for: windows_2016_virtualbox.box
[download_boxes] Checking Filehashes..
[download_boxes] Finished.
[main] Running vagrant_up_host for: logger
[vagrant_up_host] Running for logger
Attempting to bring up the logger host using Vagrant
[vagrant_up_host] Finished for logger. Got exit code: 0
[main] vagrant_up_host finished. Exitcode: 0
Good news! logger was built successfully!
[main] Finished for: logger
[main] Running vagrant_up_host for: dc
[vagrant_up_host] Running for dc
Attempting to bring up the dc host using Vagrant
[vagrant_up_host] Finished for dc. Got exit code: 0
[main] vagrant_up_host finished. Exitcode: 0
Good news! dc was built successfully!
[main] Finished for: dc
[main] Running vagrant_up_host for: wef
[vagrant_up_host] Running for wef
Attempting to bring up the wef host using Vagrant
[vagrant_up_host] Finished for wef. Got exit code: 0
[main] vagrant_up_host finished. Exitcode: 0
Good news! wef was built successfully!
[main] Finished for: wef
[main] Running vagrant_up_host for: win10
[vagrant_up_host] Running for win10
Attempting to bring up the win10 host using Vagrant
[vagrant_up_host] Finished for win10. Got exit code: 1
[main] vagrant_up_host finished. Exitcode: 1
WARNING: Something went wrong while attempting to build the win10 box.
Attempting to reload and reprovision the host...
[main] Running vagrant_reload_host for: win10
[vagrant_reload_host] Running for win10
[vagrant_reload_host] Finished for win10. Got exit code: 1
C:\Users\root\git\detectionlab\DetectionLab\build.ps1 : Failed to bring up win10 after a reload. Exiting
At line:1 char:1
+ .\build.ps1 -ProviderName virtualbox -VagrantOnly
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,build.ps1

[main] Running post_build_checks
[post_build_checks] Running Caldera Check.
[download] Running for https://192.168.38.105:8888, looking for
[download] Found at https://192.168.38.105:8888
[post_build_checks] Cladera Result: True
[post_build_checks] Running Splunk Check.
[download] Running for https://192.168.38.105:8000/en-US/account/login?return_to=%2Fen-US%2F, looking for This browser is not supported by Splunk
[download] Found This browser is not supported by Splunk at https://192.168.38.105:8000/en-US/account/login?return_to=%2Fen-US%2F
[post_build_checks] Splunk Result: True
[post_build_checks] Running Fleet Check.
[download] Running for https://192.168.38.105:8412, looking for Kolide Fleet
[download] Found Kolide Fleet at https://192.168.38.105:8412
[post_build_checks] Fleet Result: True
[post_build_checks] Running MS ATA Check.
[download] Running for https://192.168.38.103, looking for
[post_build_checks] ATA Result: True
[main] Finished post_build_checks

Checking the VMs

I used the VirtualBox command-line program to check the status of the new VMs.

root@LAPTOP-HT4TGVCP c:\Program Files\Oracle\VirtualBox>VBoxManage list runningvms
"logger" {3da9fffb-4b02-4e57-a592-dd2322f14245}
"dc.windomain.local" {ef32d493-845c-45dc-aff7-3a86d9c590cd}
"wef.windomain.local" {7cd008b7-c6e0-421d-9655-8f92ec98d9d7}
"win10.windomain.local" {acf413fb-6358-44df-ab9f-cc7767ed32bd}

Interacting with Vagrant and the Logger Host

Next I decided to use Vagrant to check on the status of the boxes, and to interact with one if I could. I wanted to find the Bro and Suricata logs.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab>cd Vagrant

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant status
Current machine states:

logger                    running (virtualbox)
dc                        running (virtualbox)
wef                       running (virtualbox)
win10                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.


root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant ssh logger

Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

29 packages can be updated.
24 updates are security updates.


Last login: Sun Jan 27 19:24:05 2019 from 10.0.2.2

root@logger:~# /opt/bro/bin/broctl status
Name         Type    Host             Status    Pid    Started
manager      manager localhost        running   9848   27 Jan 17:19:15
proxy        proxy   localhost        running   9893   27 Jan 17:19:16
worker-eth1-1 worker  localhost        running   9945   27 Jan 17:19:17
worker-eth1-2 worker  localhost        running   9948   27 Jan 17:19:17

vagrant@logger:~$ ls -al /opt/bro
total 32
drwxr-xr-x 8 root root 4096 Jan 27 17:19 .
drwxr-xr-x 5 root root 4096 Jan 27 17:19 ..
drwxr-xr-x 2 root root 4096 Jan 27 17:19 bin
drwxrwsr-x 2 root bro  4096 Jan 27 17:19 etc
drwxr-xr-x 3 root root 4096 Jan 27 17:19 lib
drwxrws--- 3 root bro  4096 Jan 27 18:00 logs
drwxr-xr-x 4 root root 4096 Jan 27 17:19 share
drwxrws--- 8 root bro  4096 Jan 27 17:19 spool

vagrant@logger:~$ ls -al /opt/bro/logs
ls: cannot open directory '/opt/bro/logs': Permission denied

vagrant@logger:~$ sudo bash

root@logger:~# ls -al /opt/bro/logs/
2019-01-27/ current/

root@logger:~# ls -al /opt/bro/logs/current/
total 3664
drwxr-sr-x 2 root bro    4096 Jan 27 19:20 .
drwxrws--- 8 root bro    4096 Jan 27 17:19 ..
-rw-r--r-- 1 root bro     475 Jan 27 19:19 capture_loss.log
-rw-r--r-- 1 root bro     127 Jan 27 17:19 .cmdline
-rw-r--r-- 1 root bro   83234 Jan 27 19:30 communication.log
-rw-r--r-- 1 root bro 1430714 Jan 27 19:30 conn.log
-rw-r--r-- 1 root bro    1340 Jan 27 19:00 dce_rpc.log
-rw-r--r-- 1 root bro  185114 Jan 27 19:28 dns.log
-rw-r--r-- 1 root bro     310 Jan 27 17:19 .env_vars
-rw-r--r-- 1 root bro  139387 Jan 27 19:30 files.log
-rw-r--r-- 1 root bro  544416 Jan 27 19:30 http.log
-rw-r--r-- 1 root bro     224 Jan 27 19:05 known_services.log
-rw-r--r-- 1 root bro     956 Jan 27 19:19 notice.log
-rw-r--r-- 1 root bro       5 Jan 27 17:19 .pid
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.capture_loss
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.communication
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.conn
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.conn-summary
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.dce_rpc
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.dns
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.dpd
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.files
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.http
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.kerberos
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.known_certs
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.known_hosts
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.known_services
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.loaded_scripts
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.notice
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.packet_filter
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.reporter
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.smb_files
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.smb_mapping
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.software
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.ssl
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.stats
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.weird
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.x509
-rw-r--r-- 1 root bro    1311 Jan 27 19:24 smb_mapping.log
-rw-r--r-- 1 root bro   15767 Jan 27 19:30 ssl.log
-rw-r--r-- 1 root bro      58 Jan 27 17:19 .startup
-rw-r--r-- 1 root bro   11326 Jan 27 19:30 stats.log
-rwx------ 1 root bro      18 Jan 27 17:19 .status
-rw-r--r-- 1 root bro      80 Jan 27 19:00 stderr.log
-rw-r--r-- 1 root bro     188 Jan 27 17:19 stdout.log
-rw-r--r-- 1 root bro 1141860 Jan 27 19:30 weird.log
-rw-r--r-- 1 root bro    2799 Jan 27 19:20 x509.log

root@logger:~# cd /opt/bro/logs/

root@logger:/opt/bro/logs# ls
2019-01-27  current

root@logger:/opt/bro/logs# cd current

root@logger:/opt/bro/logs/current# ls

capture_loss.log   conn.log     dns.log    http.log            notice.log     smb_mapping.log  stats.log   stdout.log  x509.log
communication.log  dce_rpc.log  files.log  known_services.log  smb_files.log  ssl.log          stderr.log  weird.log

root@logger:/opt/bro/logs/current# jq  -c . dce_rpc.log  | head

{"ts":1548615615.836272,"uid":"CEmNr31qusp3G7GFg4","id.orig_h":"192.168.38.104","id.orig_p":49758,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":"135","endpoint":"epmapper","operation":"ept_map"}
{"ts":1548615615.83961,"uid":"CJO7xe4JUo43IjGG01","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000357,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548615615.851544,"uid":"CJO7xe4JUo43IjGG01","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000596,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548615615.835628,"uid":"CgEcizh05xJ1ricP8","id.orig_h":"192.168.38.104","id.orig_p":49758,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":"135","endpoint":"epmapper","operation":"ept_map"}
{"ts":1548615615.839587,"uid":"CVV6WZ3vgpE673rl6a","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000382,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548615615.852193,"uid":"CVV6WZ3vgpE673rl6a","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000382,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548618003.200283,"uid":"CGYDww34wYz8eCYr96","id.orig_h":"192.168.38.103","id.orig_p":63295,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":"135","endpoint":"epmapper","operation":"ept_map"}
{"ts":1548618003.200403,"uid":"CrTZMz2nCXsiY5WOF8","id.orig_h":"192.168.38.103","id.orig_p":63295,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":"135","endpoint":"epmapper","operation":"ept_map"}
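Because these Bro logs are JSON, they lend themselves to quick ad hoc analysis beyond what jq offers. As a minimal sketch (the two records below are copied from the dce_rpc.log output above; field names follow that output), you could tally RPC operations per endpoint in a few lines of Python:

```python
import json
from collections import Counter

# Two records copied verbatim from the dce_rpc.log output above.
sample_lines = [
    '{"ts":1548615615.836272,"uid":"CEmNr31qusp3G7GFg4","id.orig_h":"192.168.38.104",'
    '"id.orig_p":49758,"id.resp_h":"192.168.38.102","id.resp_p":135,'
    '"named_pipe":"135","endpoint":"epmapper","operation":"ept_map"}',
    '{"ts":1548615615.83961,"uid":"CJO7xe4JUo43IjGG01","id.orig_h":"192.168.38.104",'
    '"id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000357,'
    '"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}',
]

def tally_operations(lines):
    """Count (endpoint, operation) pairs across Bro dce_rpc JSON records."""
    counts = Counter()
    for line in lines:
        rec = json.loads(line)
        counts[(rec["endpoint"], rec["operation"])] += 1
    return counts

print(tally_operations(sample_lines))
```

On the logger host you would feed it the real file, e.g. the lines of /opt/bro/logs/current/dce_rpc.log, instead of the embedded samples.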

root@logger:~# head /var/log/suricata/fast.log

01/27/2019-17:19:08.133030  [**] [1:2013028:4] ET POLICY curl User-Agent Outbound [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} 10.0.2.15:51574 -> 195.135.221.134:80
01/27/2019-17:19:08.292747  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Suspicious Traffic] [Priority: 3] {TCP} 10.0.2.15:55260 -> 99.84.178.103:80
01/27/2019-17:19:08.356618  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Suspicious Traffic] [Priority: 3] {TCP} 10.0.2.15:55260 -> 99.84.178.103:80
01/27/2019-17:19:08.432477  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Suspicious Traffic] [Priority: 3] {TCP} 10.0.2.15:46630 -> 91.189.95.83:80
01/27/2019-17:19:08.448249  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Suspicious Traffic] [Priority: 3] {TCP} 10.0.2.15:53932 -> 91.189.88.161:80
...trimmed...
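The fast.log format is also easy to slice up for quick triage. As a rough sketch, the regular expression below is inferred from the sample lines above, so treat it as an assumption about the layout rather than a formal grammar:

```python
import re

# One alert line copied from the fast.log output above.
sample = ("01/27/2019-17:19:08.133030  [**] [1:2013028:4] ET POLICY curl User-Agent "
          "Outbound [**] [Classification: Attempted Information Leak] [Priority: 2] "
          "{TCP} 10.0.2.15:51574 -> 195.135.221.134:80")

# Pattern inferred from the sample output; adjust if your fast.log differs.
FAST_RE = re.compile(
    r"\[\*\*\] \[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\] (?P<msg>.*?) \[\*\*\] "
    r"\[Classification: (?P<classification>.*?)\] \[Priority: (?P<priority>\d+)\] "
    r"\{(?P<proto>\w+)\} (?P<src>\S+) -> (?P<dst>\S+)"
)

def parse_alert(line):
    """Extract signature id, message, priority and endpoints from a fast.log line."""
    m = FAST_RE.search(line)
    return m.groupdict() if m else None

print(parse_alert(sample))
```

Piping /var/log/suricata/fast.log through a loop over parse_alert makes it simple to count alerts by signature or by source address.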

Docker on the Logger Host

Chris is using Docker to provide some of the Logger host functions, e.g.:

root@logger:~# docker container ls
CONTAINER ID        IMAGE                    COMMAND                   CREATED             STATUS              PORTS                              NAMES
343c18f933d9        kolide/fleet:latest      "sh -c 'echo '\\n' | …"   3 hours ago         Up 3 hours          0.0.0.0:8412->8412/tcp             kolidequickstart_fleet_1
513cb0d61401        mysql:5.7                "docker-entrypoint.s…"    3 hours ago         Up 3 hours          3306/tcp, 33060/tcp                kolidequickstart_mysql_1
b0278855b130        mailhog/mailhog:latest   "MailHog"                 3 hours ago         Up 3 hours          1025/tcp, 0.0.0.0:8025->8025/tcp   kolidequickstart_mailhog_1
ddcd3e59dda2        redis:3.2.4              "docker-entrypoint.s…"    3 hours ago         Up 3 hours          6379/tcp                           kolidequickstart_redis_1

Troubleshooting Localhost Bindings

One of the issues I encountered involved the IP addresses to which the VMs bound their VirtualBox Remote Display Protocol (VRDP) servers. The default configuration bound them to localhost on my Windows laptop. That was fine when I was working at the laptop in person, but I was doing this work remotely.

I could RDP to the laptop, and then RDP from the laptop to the VMs. This works, but it was a slight hassle to log into the Windows 2016 server, which required a ctrl-alt-del sequence. That can be solved by using the on-screen keyboard (osk.exe) to enter the ctrl-alt-end sequence on the remote laptop, but I wanted an easier solution.

Dustin Lee, who has done a lot of work customizing DetectionLab to include Security Onion (a future post, maybe?), suggested I modify the Vagrantfile with the two "--vrde" customize lines shown below. This example is for the wef host in the Vagrantfile.

    cfg.vm.provider "virtualbox" do |vb, override|
      vb.gui = true
      vb.name = "wef.windomain.local"
      vb.default_nic_type = "82545EM"
      vb.customize ["modifyvm", :id, "--vrde", "on"]
      vb.customize ["modifyvm", :id, "--vrdeaddress", "0.0.0.0"]
      vb.customize ["modifyvm", :id, "--memory", 2048]
      vb.customize ["modifyvm", :id, "--cpus", 2]
      vb.customize ["modifyvm", :id, "--vram", "32"]
      vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
      vb.customize ["setextradata", "global", "GUI/SuppressMessages", "all" ]
    end
Basically, add the two "--vrde" entries wherever you see a "virtualbox" provider block, to bind the VRDP server to 0.0.0.0 (that is, all IP addresses, including the public IP, as I want).

Before I restarted the wef host, you can see below how the VRDE server was listening only on localhost (127.0.0.1) on port 5932.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo wef.windomain.local | findstr /I vrde

VRDE:                        enabled (Address 127.0.0.1, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
...trimmed...

After changing the Vagrant file, I restarted the wef host using Vagrant.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant reload wef
==> wef: Attempting graceful shutdown of VM...
==> wef: Clearing any previously set forwarded ports...
==> wef: Fixed port collision for 3389 => 3389. Now on port 2201.
==> wef: Fixed port collision for 22 => 2222. Now on port 2202.
==> wef: Fixed port collision for 5985 => 55985. Now on port 2203.
==> wef: Fixed port collision for 5986 => 55986. Now on port 2204.
==> wef: Clearing any previously set network interfaces...
==> wef: Preparing network interfaces based on configuration...
    wef: Adapter 1: nat
    wef: Adapter 2: hostonly
==> wef: Forwarding ports...
    wef: 3389 (guest) => 2201 (host) (adapter 1)
    wef: 22 (guest) => 2202 (host) (adapter 1)
    wef: 5985 (guest) => 2203 (host) (adapter 1)
    wef: 5986 (guest) => 2204 (host) (adapter 1)
==> wef: Running 'pre-boot' VM customizations...
==> wef: Booting VM...
==> wef: Waiting for machine to boot. This may take a few minutes...
    wef: WinRM address: 127.0.0.1:2203
    wef: WinRM username: vagrant
    wef: WinRM execution_time_limit: PT2H
    wef: WinRM transport: negotiate
==> wef: Machine booted and ready!
==> wef: Checking for guest additions in VM...
    wef: The guest additions on this VM do not match the installed version of
    wef: VirtualBox! In most cases this is fine, but in rare cases it can
    wef: prevent things such as shared folders from working properly. If you see
    wef: shared folder errors, please make sure the guest additions within the
    wef: virtual machine match the version of VirtualBox you have installed on
    wef: your host and reload your VM.
    wef:
    wef: Guest Additions Version: 5.2.16
    wef: VirtualBox Version: 6.0
==> wef: Setting hostname...
==> wef: Configuring and enabling network interfaces...
==> wef: Mounting shared folders...

    wef: /vagrant =>  

The forwarded-port entry for 2201 above means I can log into the wef host as user vagrant / password vagrant, over RDP, directly from another computer.


After restarting the wef host, I checked which IP address and port the VRDE server was listening on:


root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo wef.windomain.local | findstr /I vrde
VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
...trimmed...

This should allow me to access the "screen" of the VM via port 5932 and the IP address of the host laptop. Unfortunately, there is some sort of conflict: the domain controller had also reserved the same port for its VRDE server.

root@LAPTOP-HT4TGVCP C:\Users\root>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo dc.windomain.local | findstr /I vrde

VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
...trimmed...

I encountered a similar issue with the domain controller, which also failed to resolve the collision between its forwarded RDP port (3389) and the host system's own RDP port.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant reload dc
==> dc: Attempting graceful shutdown of VM...
==> dc: Clearing any previously set forwarded ports...
==> dc: Fixed port collision for 22 => 2222. Now on port 2200.
==> dc: Clearing any previously set network interfaces...
==> dc: Preparing network interfaces based on configuration...
    dc: Adapter 1: nat
    dc: Adapter 2: hostonly
==> dc: Forwarding ports...
    dc: 3389 (guest) => 3389 (host) (adapter 1)
    dc: 22 (guest) => 2200 (host) (adapter 1)
    dc: 5985 (guest) => 55985 (host) (adapter 1)
    dc: 5986 (guest) => 55986 (host) (adapter 1)

I haven't solved these problems yet. I wonder if it's a result of using pre-built VMs, which include the 5.x series of VirtualBox Guest Additions, while my VirtualBox installation runs 6.0?
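One possible workaround, which I have not tested, is to pin each VM's VRDE server to a distinct port in its Vagrantfile provider block. VBoxManage accepts a "--vrdeport" switch, so a sketch for the dc host might look like this (the port numbers are hypothetical examples):

```ruby
    cfg.vm.provider "virtualbox" do |vb, override|
      vb.customize ["modifyvm", :id, "--vrde", "on"]
      vb.customize ["modifyvm", :id, "--vrdeaddress", "0.0.0.0"]
      # Hypothetical: assign a unique VRDE port per VM (e.g. 5932 for wef,
      # 5933 for dc) so the VMs stop contending for the same port.
      vb.customize ["modifyvm", :id, "--vrdeport", "5933"]
    end
```

Vagrant passes these through to VBoxManage, so the same effect should be achievable with "VBoxManage modifyvm dc.windomain.local --vrdeport 5933" while the VM is powered off.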

Summary

These are minor issues; the result of all this work is four systems offering Windows client and server features, plus instrumentation. That instrumentation could be another topic for discussion. I'm also excited by the prospect of running all of this in the cloud. Furthermore, Dustin Lee has a fork of DetectionLab that replaces some of the instrumentation with Security Onion!

World Economic Forum Recognizes Cyberattacks in Top Risks for 2019

The World Economic Forum (WEF) recently released The Global Risks Report 2019, ranking threats to cybersecurity among the top five risks facing society in the near future. The report presents the results of the WEF’s most recent Global Risks Perception Survey of nearly 1,000 respondents, and identifies challenges to consider for the year ahead, as well as potential future threats down the road.

Of the top risks identified by survey respondents, massive data fraud and theft was ranked the number-four global risk by likelihood over a 10-year horizon, followed closely by cyberattacks, which ranked fifth on the list. This is in line with last year’s report, in which cyber risks were first identified as a top concern; they likely remained on the list due to the large number of data breaches – affecting both hardware and software – that we saw over the past 12 months.

Most recently, Marriott suffered a data breach that affected approximately 383 million guests, leaving their personally identifiable information – names, addresses, contact information, passport numbers, and more – exposed to cyber criminals. They certainly weren’t the only company that had a run-in with a cyberattack in 2018, and with the frequency at which breaches are occurring, it’s only a matter of time before another significant attack reaches the news headlines.

Another area of consideration lies with the fact that we’re seeing an increasing integration of digital technologies into almost all aspects of our society, and there are even concerns about artificial intelligence being used to engineer more potent cyberattacks. With technological advances and new software emerging on a rapid basis, it’s no surprise that the potential for cyberattacks and data breaches remains a top concern for many in this digital landscape. This concern is deepened when we consider how extremely reliant we are on this technology, and how vulnerable much of the software in our world remains. According to Volume 9 of our State of Software Security report, more than 85 percent of all applications have at least one vulnerability in them, and more than 13 percent of applications have at least one very high severity flaw.

The WEF report specifically states that 82 percent of respondents expected “increased risks in 2019 of cyberattacks leading to theft of money and data,” with 80 percent expecting cyberattacks to lead to a “disruption of operations.” This illustrates the awareness of the role that technology plays in shaping the global risk landscape for individuals, businesses, and governments alike.

While the future may look daunting, there are steps that you can take to start actively securing the software that your organization develops. Check out the below resources to get started, or schedule a demo to learn how we can help you secure your organization’s software.

From shopping to car design, our customers and partners spark innovation across every industry

Judson Althoff visits Kroger’s QFC store in Redmond, WA, one of two pilot locations featuring connected customer experiences powered by Microsoft Azure and AI. Also pictured, Wesley Rhodes, Vice President of Technology Transformation at Kroger.

Computing is embedded all around us. Devices are increasingly more connected, and the availability of data and information is greater now than it has ever been. To grow, compete and respond to customer demands, all companies are becoming digital. In this new reality, enterprise technology choices play an outsized role in how businesses operate, influencing how employees collaborate, how organizations ensure data security and privacy, and how they deliver compelling customer experiences.

This is what we mean when we talk about digital transformation. As our CEO Satya Nadella described it recently, it is how organizations with tech intensity adopt faster, best-in-class technology and simultaneously build their own unique digital capabilities. I see this trend in every industry where customers are choosing Microsoft’s intelligent cloud and intelligent edge to power their transformation.

Over just the past two months, customers as varied as Walmart, Gap Inc., Nielsen, Mastercard, BP, BlackRock, Fruit of the Loom and Brooks Running have shared how technology is reshaping all aspects of our lives — from the way we shop to how we manage money and save for retirement. At the Consumer Electronics Show (CES) earlier this month, Microsoft customers and partners highlighted how the Microsoft cloud, the Internet of Things (IoT) and artificial intelligence (AI) play an ever-expanding role in driving consumer experiences, from LGE’s autonomous vehicle and infotainment systems, to Visteon’s use of Azure to develop autonomous driving development environments, to ZF’s fleet management and predictive maintenance solutions. More recently, at the National Retail Federation (NRF) conference, Microsoft teamed up with retail industry leaders like Starbucks that are reimagining customer and employee experiences with technology.

In fact, there is no shortage of customer examples of tech intensity. They span all industries, including retail, healthcare, automotive manufacturing, maritime research, education and government. Here are just a few of my favorite examples:

Together with Microsoft, Kroger – America’s biggest supermarket chain – opened two pilot stores offering new connected experiences with Microsoft Azure and AI and announced a Retail as a Service (RaaS) solution on Azure. This partnership with Kroger resonates strongly with me because I first met with the company’s CEO in 2013 soon after joining Microsoft. Since then, I have witnessed the Kroger-Microsoft relationship grow and mature beyond measure. The pilot stores feature “digital shelves” which can show ads and change prices on the fly, along with a network of sensors that keep track of products and help speed shoppers through the aisles. Kroger may eventually roll out the Microsoft cloud-powered system in all its 2,780 supermarkets.

In the healthcare industry, earlier this month, we announced a seven-year strategic cloud partnership with Walgreens Boots Alliance (WBA). Through the partnership, WBA will harness the power of Microsoft Azure cloud and AI technology, Microsoft 365, health industry investments and new retail solutions with WBA’s customer reach, convenient locations, outpatient health care services and industry expertise to make health care delivery more personal, affordable and accessible for people around the world.

Pharmacy staff member with patient

Walgreens Boots Alliance will harness the power of Microsoft Azure cloud and AI technology and Microsoft 365 to help improve health outcomes and lower overall costs.

Customers tell us that one of the biggest advantages of working with Microsoft is our partner ecosystem. That ecosystem has brought together BevMo!, a wine and liquor store, and Fellow Inc., a Microsoft partner. Today, BevMo! is using Fellow Robots to connect supply chain efficiency with customer delight. Power BI, Microsoft Azure and AI enable the Fellow Robots to provide perfect product location using image recognition to offer customers different types of products by integrating point of sale interactions. BevMo! is also using Microsoft’s intelligent cloud solutions to empower its store associates to deliver better customer service.

Fellow Robots product in a retail store

Fellow Robots from partner Fellow, Inc. are helping BevMo! connect supply chain efficiency and better customer service. The robots are powered by Microsoft Azure, AI and Machine Learning.

In automotive, companies like Toyota are breaking new ground in mixed reality. With its HoloLens solution, Toyota can now project existing 3D CAD data used in the vehicle design process directly onto the vehicle for measurements, optimizing existing processes and minimizing errors. In addition, Toyota is trialing Dynamics 365 Layout to improve machinery layout within its facilities and Dynamics 365 Remote Assist to provide workers with expert support from off-site designers and engineers. Also, Toyota has deployed Surface devices, enabling designers and engineers to fluidly connect in real time as part of a company-wide investment to accelerate innovation through collaboration.

A Toyota engineer uses Microsoft HoloLens to perform a process called “film coating thickness inspection” to manage the thickness of the paint for consistent coating quality on every vehicle.

Digital transformation is also changing the way we learn. For example, in the education space, the Law School Admission Council (LSAC), a non-profit organization devoted to law and education worldwide, announced its selection of the Windows platform on Surface Go devices to digitize the Law School Admission Test (LSAT) for more than 130,000 LSAT test takers each year. In addition to the Digital LSAT, Microsoft is working with LSAC on several initiatives to improve and expand access to legal education.

Surface Go device
One of the thousands of Microsoft Surface Go devices running Windows 10 and proprietary software to facilitate the modern and efficient Digital LSAT starting in July 2019.

Beyond manufacturing and retail, organizations are adopting the cloud and AI to reimagine environmental conservation. Fish may not be top of mind when thinking about innovation, but Australia’s Northern Territory is building its own technology to ensure the sustainable management of fisheries resources for future generations. For marine biologists, a seemingly straightforward task like counting fish becomes significantly more challenging or even dangerous when visibility in marine environments is low and when large predators (think: saltwater crocodiles) live in those environments. That is where AI comes in. Scientists use the technology to automatically identify and count fish photographed by underwater cameras. Over time, the AI solution becomes more accurate with each new fish analyzed. Greater availability of this technology may soon help other areas of the world improve their understanding of aquatic resources.

Shane Penny, Fisheries Research Scientist, and his team using baited underwater cameras as part of Australia’s Northern Territory Fisheries artificial intelligence project with Microsoft to fuel insights in marine science.

With almost 13,000 post offices and more than 134,000 employees, Poste Italiane is Italy’s largest distribution network. The organization delivers traditional mail and parcels but also operates at the digital frontier through innovation in financial and insurance services as well as mobile and digital payments solutions. Poste Italiane selected Dynamics 365 for its CRM, creating the largest online deployment in Italy. The firm sees the deployment as a critical part of its strategy to support growth, contain costs and deliver a better, richer customer experience.

Poste Italiane building
Poste Italiane’s selection of Microsoft is part of its digital transformation program, which aims to reshape the retail sales approach and increase the cross-selling revenues and profitability of its subsidiaries BancoPosta and PosteVita.

These examples only scratch the surface of how digital transformation and digital capabilities are bringing together people, data and processes in a way that generates value, competitive advantage and powers innovation across every industry. I am incredibly humbled that our customers and partners have chosen Microsoft to support their digital journey.

The post From shopping to car design, our customers and partners spark innovation across every industry appeared first on The Official Microsoft Blog.

Build security into your IoT plan or risk attack

The Internet of Things (IoT) is no longer some futuristic thing that’s years off from being something IT leaders need to be concerned with. The IoT era has arrived. In fact, Gartner forecasts there will be 20.4 billion connected devices globally by 2020.

An alternative proof point is the fact that when I talk with people about their company's IoT plans, they don’t look at me like a deer in headlights as they did a few years ago. In fact, often the term “IoT” doesn’t even come up. Businesses are connecting more “things” to create new processes, improve efficiency, or improve customer service.

As they do, though, new security challenges arise. One of them is that there’s no “easy button”: IT professionals can’t just deploy some kind of black box and have everything be protected. Securing the IoT is a multi-faceted problem with many factors to consider, and it must be built into any IoT plan.


CYBER TERRORISM THROUGH SOCIAL MEDIA: A CATEGORICAL BASED PREVENTIVE APPROACH

This paper deals with categorical cyber-terrorism threats on social media and a preventive approach to minimizing them. Following the categorical approach of the United Nations Office on Drugs and Crime (UNODC), the threats are classified as propaganda, financing, training, planning, execution, and cyber attacks. To prevent them, techniques for countering social spam, campaigns, misinformation, crowdturfing, and other practical threats are discussed, along with measures to be taken.

Heimdal Security Supports the ROC Leeuwenborgh Capture the Flag (CTF) Challenge

We, at Heimdal Security, are focused on educating both our readers and customers through actionable and useful blog articles, security alerts, protection guides, online courses, and other helpful resources designed to enhance cybersecurity awareness.

Through every project, education remains a core focus. Our goal is to make online security and privacy simple and accessible to anyone, and we think this starts with understanding the basics.

We also believe in the power of the cybersecurity community and supporting it as much as possible.

Every time we have the chance to support young and passionate students with their educational endeavors, encourage them to develop security skills and dive deeper into the world of cybersecurity, we do it without hesitation.

With that in mind, we are happy to announce that we are supporting the students from ROC Leeuwenborgh in the Netherlands by providing security software during their upcoming CTF competition.

A Capture the Flag (CTF) competition is typically hosted at cybersecurity conferences and challenges participants to use their security skills to solve problems by capturing “flags” from compromised computer networks.

This type of event requires players to register with either the red or the blue team and includes a series of challenges that vary in difficulty.

Students from ROC Leeuwenborgh are competing on the blue team, building their own CTF network with several “flags” and defending them against attacks from the red team.

They will present the network at the Security Congress event, “The Journey of the Digital Experience over cybersecurity,” in Brightlands, Netherlands, which takes place on the 1st of February. The event will also feature security talks by students, consisting of workshops, presentations, and demonstrations.

How the Capture the Flag network works

The Dutch students have built their own network using components and resources from their education center, which include a Ubiquiti access point, a router, switches, the RedSocks Malicious Threat Detector (MTD), network-attached storage (NAS), and seven servers.

The Capture the Flag environment will be divided into three levels: beginner, advanced, and expert. It will focus on cracking the (encrypted) passwords of the access points to gain access to the network.

There will be two dedicated servers for the beginner level, on which participants will have to use the “EternalBlue” exploit that powered the massive WannaCry ransomware outbreak in 2017.

During the CTF competition, participants will solve puzzles of varying difficulty (cryptographic, steganographic, and others) to obtain a password that unlocks password-protected Word documents placed on the desktop. These documents hold the “flags” needed to break into two more servers in the beginner network. Participants will then face further puzzles before they can access the advanced network.

The same methods apply at the “expert” level, but at this point students will use Heimdal Security’s solution, Thor Premium Home, to monitor the networks and make the challenge more competitive.

During the event, participants will have to demonstrate both offensive and defensive skills: hacking and protecting networks, cryptography, exploitation, and more. They will work in teams to face the challenges and capture all the flags within the specified timeframe.

The best team will be rewarded with attractive prizes.

The Dutch Police also joins the CTF competition

It’s worth mentioning that an important partner of this event is the Dutch Police, which will host a separate Capture the Flag network. Given the rise of advanced online threats, phishing, and DDoS attacks, the role of the police and IT specialists is indispensable in raising awareness about the importance of cybersecurity.

This CTF competition offers participants the opportunity to “become” IT specialists within the Dutch Police and use their hacking skills to collect as many flags as possible ahead of other competitors. It is mainly focused on cryptography and forensics, and participants are encouraged to bring their own device to join the competition. They will compete in the same environment and the winners will be rewarded with attractive prizes.

This is a great way for students, as well as professional and amateur hackers from the Netherlands and beyond, to learn hacking techniques, improve their problem-solving skills, and, most importantly, gain hands-on practice.

Bottom line: every Capture the Flag competition is a huge opportunity for students and people passionate about cybersecurity to think outside the box and test their teamwork skills.

If that sounds like you, take the leap and join the competition on the 1st of February!

More details about the event can be found here and information about the registration is here.

If you are interested in cybersecurity, or you want to embark on a career in infosec, you can always check out our free educational resources and learn how to better protect yourself against cybercriminal attacks.

The post Heimdal Security Supports the ROC Leeuwenborgh Capture the Flag (CTF) Challenge appeared first on Heimdal Security Blog.

No-deal Brexit and GDPR: here’s what you need to know

Business craves certainty and Brexit is currently giving us anything but. At the time of writing, it’s looking increasingly likely that Britain will leave the EU without a withdrawal agreement. This blog rounds up the latest developments on data protection after a no-deal Brexit. (Appropriately, we’re publishing on Data Protection Day, the international campaign to raise public awareness about privacy rights and protecting data.)

Under the General Data Protection Regulation, no deal would mean the UK will become a ‘third country’ outside of the European Economic Area. Last week, the Minister for Data Protection Pat Breen said a no-deal Brexit would have a “profound effect” on personal data transfers into the UK from the EU. Speaking at the National Data Protection Conference, he pointed out that although Brexit commentary has focused on trade in goods, services activity relies heavily on flows of personal data to and from the UK.

“In the event of a ‘no-deal’ Brexit, the European Commission has clarified that no contingency measures, such as an ‘interim’ adequacy decision, are foreseen,” the minister said.

This means personal data transfers can’t continue as they do today. At 11pm GMT on Friday 29 March 2019, the UK will legally leave the European Union. All transfers of data between Ireland and the UK or Northern Ireland will then be considered international transfers.

Keep calm and carry on

Despite the ongoing uncertainty, there are backup measures, as the Minister pointed out. “While Brexit does give rise to concerns, it should not cause alarm. The GDPR explicitly provides for mechanisms to facilitate the transfer of personal data in the event of the United Kingdom becoming a third country in terms of its data protection regime,” he said.

The latest advice from the Data Protection Commissioner is that Irish-based organisations will need to implement legal safeguards to transfer personal data to the UK after a no-deal Brexit. The DPC’s guidance outlined some typical scenarios if the UK becomes a third country.

“For example, if an Irish company currently outsources its payroll to a UK processor, legal safeguards for the personal data transferred to the UK will be required. If an Irish government body uses a cloud provider based in the UK, it will also require similar legal safeguards. The same will apply to a sports organisation with an administrative office in Northern Ireland that administers membership details for all members in Ireland and Northern Ireland,” it said.

Some organisations and bodies in Ireland will already be familiar with the legal transfer mechanisms available for the transfer of personal data to recipients outside of the EU, as they will already be transferring to the USA or India, for example.

Next steps for ‘third country’ status

BH Consulting’s senior data protection consultant Tracy Elliott says that data protection officers should take these steps to prepare for the UK’s ‘third country’ status under a no-deal Brexit.

·       review their organisation’s processing activities
·       identify what data they transfer to the UK
·       check if that includes data about EU citizens

“Consider your options of using a contract or possibly changing that supplier. If your data is hosted on servers in the UK, contact your hosting partner and find out what options are available,” she said.

Larger international companies may already have data sharing frameworks in place, but SMEs that routinely deal with the UK, or that have subsidiaries in the UK, might not have considered this issue yet. All communication between them, even if they’re part of the same group structure, will need to be covered contractually for data sharing. “There are five mechanisms for doing this, but the simplest and quickest way to do this is to roll out model contract clauses, or MCCs. They are a set of guidelines issued by the EU,” Tracy advised.

Sarah Clarke, a specialist in privacy, security, governance, risk and compliance with BH Consulting, points out that using MCCs has its own risks. The clauses are due for an update to bring them into line with GDPR. Meanwhile the EU-US data transfer mechanism known as Privacy Shield is still not finalised, she added.

In the short term, however, MCCs are sufficient both for international transfers between legal entities in one organisation, and for transfers between different organisations. “For intra-group transfers, binding corporate rules are too burdensome to implement ‘just in case’. You can switch if the risk justifies it when there is more certainty,” Sarah Clarke said.

Further reading

The European Commission website has more information on legal mechanisms for transferring personal data to third countries. The UK Information Commissioner’s Office has a recent blog that deals with personal data flows post-Brexit. You can also check the Data Protection Commission site for details about transfer mechanisms and derogations for specific situations. The DPC also advises checking back regularly for updates between now and Brexit day.

The post No-deal Brexit and GDPR: here’s what you need to know appeared first on BH Consulting.

A Shortage in Common Sense: The Myth of the Talent Gap

I have a visceral reaction every time I encounter yet another article bemoaning the so-called "talent gap" or "labor gap" in cybersecurity. Having been in and out of the job market several times over the past decade (for better and, more often, for worse), I can honestly say this is utter nonsense. The roots of this clamor began more than a decade ago in DC as federal agencies grappled with modernizing, making use of the annual Sept/Oct budget season to decry how poor and helpless they were in order to justify demands for ever-increasing budgets. Local universities (such as UMUC) quickly caught on to the marketing opportunity and launched cybersecurity degree programs. Meanwhile, ISC2 helped ensure that the CISSP was a mandatory component for hiring in many positions.

While I am still in the midst of a job search (one that's a year old at this point), I find I need to speak out on the recent TechCrunch OpEd piece "Too few cybersecurity professionals is a gigantic problem for 2019" in order to address some of the nonsensical statements made that really have no business being taken seriously. The author does get a couple things right, but not enough to compensate for perpetuating many myths that need to be put to rest.

Allow me to start by addressing some sound-bites from the piece:

"Seasoned cyber pros typically earn $95,000 a year, often markedly more, and yet job openings can linger almost indefinitely. The ever-leaner cybersecurity workforce makes many companies desperate for help."

There are several reasons why positions often sit open for long periods of time: they require an existing clearance; hiring managers are obtusely fixated on experience with a very narrow list of tools (a tool is a tool is a tool!); recruiters aren't even passing resumes along to hiring managers, often because of a failure to find keywords, sometimes because of useless biases (e.g., I've had several short stints due to layoffs and projects being terminated - outside my control! - which is used to rule me out), or just as often because they don't have the first clue what they're looking for; positions are requiring "experience" with far too many things; the interview process focuses too much on tool fit rather than people fit, including failing to evaluate attitude, aptitude, and adaptability.

The bottom line here is this: if you see a position that's been open a long time, then that's a red flag. Something is broken in the hiring process. There are literally thousands (likely tens of thousands) of quality candidates on the market today with varying degrees of experience all trying to find work, and yet we cannot land these positions because of arbitrary requirements.

Oh, and by the way, one of those arbitrary requirements is geographical. If you have 2 or more offices in separate geographic areas, then you have an implicit "remote worker" policy, because a certain percentage of your workforce is working in a location separate from your primary HQ. Not everyone wants to live in big cities. Not everyone wants to move to key tech "capitals" like Silicon Valley or Austin, TX, or Seattle or NYC or DC or Boston. Those places are all expensive (in some cases very expensive) and, especially for junior hires, completely inaccessible financially. It is beyond time to support remote workers and introduce flexibility into the workplace. It's ironic that in 1998-2001, when there was also allegedly a labor shortage, companies were willing to do far more things to attract and retain talent. All of that has gone away since the recession in 2009. It's time to wake up and change.

"Between September 2017 and August 2018, U.S. employers posted nearly 314,000 jobs for cybersecurity pros."

Posting a job with "cybersecurity" (or comparable) in a title or description is a far cry from the position actually being oriented to cybersecurity. This is a situation that has worsened in the last few years. I encounter numerous "cybersecurity" roles that have little-to-nothing to do with cybersecurity. For example, it's very common to find "DevSecOps" positions that are acutely focused on DevOps automation. Or, sometimes they're just recast application security roles that got a trendy bump to "DevSecOps." Similarly, the "security architect" title has become a veritable grab bag of random terms, tools, and duties, and can be anything from a SOC analyst to hands-on engineer to manager to developer and so on.

Authors of job postings are really doing themselves and the labor pool a major disservice by failing to write clear, concise, accurate job postings. It's very common to encounter posts that list everything but the kitchen sink, not because they need actual direct experience with everything under the sun, but because they aspirationally believe that some day they might need those skills, or, worse, because they really need to hire 5 people, but only got approval for 1 slot, and so they try to find a mythological being who's expert in secure coding, appsec, netsec, cloud security, container security, traditional infrastructure, cloud infrastructure, divination, unicorn taming, and budget mastery. Worse, they then start out interviews by asking if the candidate has experience with a handful of tools, and failing that, either drop the candidate (because oooOOOOooo there's magic in big security vendor tools) or force them to continue through a process that reveals an increasingly bad fit.

And now, the kicker: You shouldn't be hiring this many security people anyway! There's a delicious irony to being interviewed for a dedicated and growing cybersecurity team/program that espouses "build security in" ideology. If your org is really so interested in building security into everything, then quit trying to create massive cybersecurity teams/programs that only lead to failed old enablement practices and "otherness" that actually alienates your internal clients and decreases security. But I digress...

"Companies are trying to cope in part by relying more aggressively on artificial intelligence and machine learning, but this is still at a relatively nascent stage and can never do more than mitigate the problem."

First, never say never, m'kay? That's just silly. Second, while vendors are aggressively pushing AI/ML solutions, most of it isn't even AI or ML (it's amazing how many products are just elaborate regex schemes under the hood!). The phrase "snake oil" comes to mind. Third - and this is very important! - the focus should absolutely, positively be on automation and orchestration today. There are tons of things that can be automated, and there is a growing pool of reasonably qualified candidates with experience using generic A&O tools (e.g., ansible, puppet, chef, etc.).

The key takeaway here is this: AI/ML is an easy target for throwing stones, but the comment obscures an important lesson, which is that organizations are not doing enough with automation and orchestration, especially as it pertains to security. This reality needs to be remedied ASAP!
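To make the automation point concrete: even a few lines of stock scripting can take a routine triage chore off an analyst's plate. A minimal sketch in plain Python (the log excerpt and threshold here are hypothetical; a real deployment would pull from syslog or a SIEM rather than a hard-coded list):

```python
import re
from collections import Counter

# Hypothetical auth-log excerpt, for illustration only
LOG_LINES = [
    "Jan 28 03:14:07 host sshd[991]: Failed password for root from 203.0.113.5 port 52144 ssh2",
    "Jan 28 03:14:09 host sshd[991]: Failed password for root from 203.0.113.5 port 52150 ssh2",
    "Jan 28 03:14:11 host sshd[993]: Failed password for admin from 198.51.100.7 port 40022 ssh2",
    "Jan 28 03:15:02 host sshd[995]: Accepted password for alice from 192.0.2.10 port 51820 ssh2",
]

FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def failed_logins_by_ip(lines):
    """Count failed SSH login attempts per source IP."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

def flag_suspects(lines, threshold=2):
    """Return source IPs whose failed-attempt count meets the threshold."""
    return sorted(ip for ip, n in failed_logins_by_ip(lines).items() if n >= threshold)

if __name__ == "__main__":
    print(flag_suspects(LOG_LINES))  # ['203.0.113.5']
```

Chores like this one don't need AI/ML at all, which is exactly the point: a junior hire with basic scripting skills and the right attitude can automate them today.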

"These are ideal candidates, but, in fact, the backgrounds of budding cyber pros need not be nearly this good."

There is no perfect, and perfect is the enemy of good. Hiring managers, HR, and recruiters: pay attention! You. Should. Be. Hiring. For. People. Fit. And. Aptitude. FULL STOP. If you're having trouble "finding good candidates," then YOU ARE THE PROBLEM. I could rant endlessly on this point, but won't. Introspection, please.

"Almost no cybersecurity pro over 30 today has a degree in cybersecurity and many don't even have degrees in computer science."

Mmmmmmmmmmaaaybe. I'm over 30. I have an undergrad in CompSci. I have a Master's degree in Engineering Mgmt with a concentration in InfoSec Mgmt. Also, the older millennials are now hitting their 30s. Cybersecurity (or comparable) degrees have been around for 15+ years. This statement is in many ways demonstrably false, but more importantly, IT DOESN'T MATTER ONE BIT!

The problem, again, is with the hiring process, including having arbitrary "requirements" that artificially shrink the labor pool (which is the point the author seems to be making here). QUIT HIRING BASED ON A PUNCH LIST! Sing it with me: attitude, aptitude, and adaptability! These are the key qualities you should be seeking in the majority of hires.

Here's a perfect example: I interviewed in mid-2018 for a "security architect" role that had been open for a very long time (red flag!). When I hopped on what I thought was a quick intro call with the hiring manager, I was instead met with the hiring manager and 2 reports (red flag!). The 2 reports gushed over how awesome the hiring manager was to work for (odd), and then they launched into questions. Every single question was about hadoop security, even though the first question they asked was "do you have extensive experience securing hadoop?" to which I answered "none, really, but it's just a NOSQL data store, so *shrug*." Moreover, the hiring manager was a total jerk on the call (not sure if this was being done as a stress test tactic or because the guy was just a jerk). I would be asked a question, I would start to answer (literally, I'd just get a couple words out of my mouth, like "Well, for starters...") and the hiring manager would jump in, tell me my answer was insufficient (I hadn't even answered yet!), and then demand I "get to the point." Suffice to say, I cut the interview off and then provided strong feedback to the third-party recruiter to run away.

There are 2 lessons from this experience: 1) The job description (JD) was completely and wholly inadequate. While it mentioned hadoop experience as a requirement, it became immediately clear that they didn't so much want a security architect as they wanted a hadoop expert (go get a contractor - sheesh!). 2) Don't be jerks to candidates! If that hiring manager is allowed to exist and persist within that organization, then that is absolutely not a place I would ever consider working (and have avoided applying or being submitted there ever since).

Key takeaways: If you're having trouble finding candidates, make sure the JD is accurate, and make sure your hiring manager is doing a good job representing the company. It's still a small industry and many of us talk and share stories. Wanna kill your applicant pool? Become known as a horrible place to work that's filled with belligerents and "brilliant jerks." I'm a big fan of Reed Hastings' (Netflix) "no brilliant jerks" policy. Hugely and most biggestly important.

"Asking too much from prospective pros isn't the only reason behind the severe cyber manpower shortage."

Perhaps not, but it's a major factor in hiring decisions. If you cannot offer any semblance of work-life balance, especially for your experienced hires who may very well have families, then you need to re-evaluate your org culture. Moreover, organizations must immediately stop trying to hire single resources to fill 5 different roles. These candidates are rare, if they exist at all, and it's killing your hiring process. More importantly, it means you don't actually know your priorities, AND... it says you're not willing to invest in your people to help them develop into the retainable talent you so desperately need. Once again, it's time for some serious introspection here!

"One key finding was that 43% of those polled said their organization provides inadequate security training resources, heightening the possibility of a breach."

Ya gotta love the orthogonal throw-away quip... this comment has nothing to do with the "labor gap," nor is it about the challenges of tech hiring. This point actually pertains directly to organizational culture. At face, it's true, insomuch as organizations tend to over-rely on annual security (and privacy) training, among other things. However, what it really reflects is a huge problem with pretty much all organizations in that they don't really make security a priority, they don't make it a shared responsibility, and they don't hire the right people in HR, org dev, or security to help executive leadership transform org culture in a favorable and necessary manner.

"IBM, for example, creates what it calls "new collar" jobs, which prioritize skills, knowledge and willingness to learn over degrees."

"Technology companies still must work much harder to broaden their range of potential candidates, seeking smart, motivated and dedicated individuals who would be good teammates."

To close on something a bit more positive, I very much agree with and appreciate these points. But, again, this is all about organizations needing to fix themselves, and ASAP at that. If you think hiring for a cybersecurity role is purely about running down a list of arbitrary "requirements" and only accepting candidates who meet all (or most) of them, then you're failing. I've mentioned it several times throughout my post here, and I'll say it once again: Hire for attitude, aptitude, and adaptability!!! If you don't know how to do this, then get educated and fix your hiring process.

The analogy I've used of late is this: A car repair shop does not hire a mechanic simply because they know how to use metric vs. standard/imperial wrenches. No sane person would say "oh, I'm sorry, you only know how to use wrenches in millimeter sizes, but we need someone who can use a wrench in fractions of inches." Think about that for a second! How insane would that be?! And yet... this is exactly how the vast majority of orgs are trying to hire tech talent. "Oh, I'm sorry, you've worked with Symantec, but not McAfee or Trend? We need someone experienced with those other brands." Or, "Oh, we're a Rapid7 shop here, so I don't see how your Tenable (or Qualys) experience really applies." Or, "When were you last 'hands-on' in a role? Oh, I see, it's been a few years? Well, thanks for your time..." Etc. Etc. Etc.

These are all things I have experienced first-hand in the past year. Tech is tech, tools are tools, and the most important thing is my willingness and ability to learn and adapt. But, alas, very few organizations want to invest in their people. Very few organizations know how to interview for attitude, aptitude, and adaptability. It's truly sad, and I think it's a skill that organizations have actually lost in the last 10-15 years. I had a great job with AOL, and I landed it not because I had experience with every security tool on the market, but because I had a solid base technical knowledge and I had the attitude, aptitude, and adaptability to quickly learn and apply new things. THIS HAS BEEN LOST IN TODAY'S JOB MARKET.

---
To close this ranty post out, I just want to reiterate, for the umpteenth time, that I strongly believe the "talent gap" or "labor shortage" is largely imagined and manufactured because organizations don't know how to hire, make absolutely no commitment to train and retain their people, and have in general completely lost their way. It's very sad and very troubling. We used to know how to do this! Where have all these skills gone within HR and management?

These issues stem partly from cuts made during previous economic downturns, but I also suspect that we're seeing the "day-trader" mentality as it hits hiring, too. In this age of 24x7 news and pervasive, ubiquitous social media, and endless amounts of raw outrage... we have lost our humanity within organizations. Human resources has always ultimately been about protecting organizations from their people, but it has really gotten broken badly in the past decade. Hiring managers are often forced to do too much with too little, all while being stuck following grossly outmoded thinking and strategies (e.g., if you build a SOC today thinking people first, then automation and orchestration, then I'm sorry to say that you're already starting 10 yrs behind the curve).

If you're trying to hire people, then you need to force introspection and open dialogue within your organization, and you need to DO IT NOW. I'm a GenX'er. I want to do good work with a good org and good team where I'm treated respectfully, but allowed work-life balance. I would like to have some meaning in my job. Younger generations are reportedly even more concerned about this last point, wanting to contribute meaningfully. Once upon a time, I was told by a higher-up that corporations could not exist if they weren't benefiting the general good of society. I'm not completely sure this is true, but I would love for it to be so. However, in application, what this means is that organizations must also take care of their people, which many are failing at today. Forget about all the various movements and management fads out there and take this to heart: If you want good employees who will stick with you, then you have to hire good people AND TREAT THEM RIGHT. It really is just that simple.

As a closing remark, I strongly recommend that people go read Laloux's Reinventing Organizations as it is remarkable and a necessary evolution in business management.

Addendum (1/31/19): One additional observation: Numbers lie. I have found here in the DC market that many jobs get reposted multiple times by placement/search firms. Positions, for example, with major firms like Fannie, Freddie, ManTech, DHS, CapOne, etc., will often show up a dozen times or more, but listed by the headhunter firms and not the actual hiring company. So, imagine that out of, say, 300k job postings for "cybersecurity," that number may actually be closer to 25-30k in real jobs. Quite shocking to think about and realize, and as a job searcher it's extremely frustrating. I'll literally get a flurry of inquiries from a half dozen or more recruiters when a new position posts. Crazy.

Privacy and Security by Design: Thoughts for Data Privacy Day

Data Privacy Day has particular relevance this year, as 2018 brought privacy into focus in ways other years have not. Ironically, in the same year that the European Union’s (EU) General Data Protection Regulation (GDPR) came into effect, the public also learned of glaring misuses of personal information and a continued stream of personal data breaches. Policymakers in the United States know they cannot ignore data privacy, and multiple efforts are underway: bills were introduced in Congress, draft legislation was floated, privacy principles were announced, and a National Institute of Standards and Technology (NIST) Privacy Framework and a National Telecommunications and Information Administration (NTIA) effort to develop the administration’s approach to consumer privacy are in process.

These are all positive steps forward, as revelations about widespread misuse of personal data are causing people to mistrust technology—a situation that must be remedied.

Effective consumer privacy policies and regulations are critical to the continued growth of the U.S. economy, the internet, and the many innovative technologies that rely on consumers’ personal data. Companies need clear privacy and security expectations to not only comply with the diversity of existing laws, but also to grow businesses, improve efficiencies, remain competitive, and most importantly, to encourage consumers to trust organizations and their technology.

If an organization puts the customer at the core of everything it does, as we do at McAfee, then protecting customers’ data is an essential component of doing business. Robust privacy and security solutions are fundamental to McAfee’s strategic vision, products, services, and technology solutions. Likewise, our data protection and security solutions enable our enterprise and government customers to more efficiently and effectively comply with regulatory requirements.

Our approach derives from seeing privacy and security as two sides of the same coin. You can’t have privacy without security. While you can have security without privacy, we strongly believe the two should go hand in hand.

In comments we submitted to NIST on “Developing a Privacy Framework,” we made the case for Privacy and Security by Design. This approach requires companies to consider privacy and security on the drawing board and throughout the development process for products and services going to market. It also means protecting data through a technology design that considers privacy engineering principles. This proactive approach is the most effective way to enable data protection because the data protection strategies are integrated into the technology as the product or service is created. Privacy and Security by Design encourages accountability in the development of technologies, making certain that privacy and security are foundational components of the product and service development processes.

The concept of Privacy and Security by Design is aspirational but is absolutely the best way to achieve privacy and security without end users having to think much about them. We have some recommendations for organizations to consider in designing and enforcing privacy practices.

There are several layers that should be included in the creation of privacy and data security programs:

  • Internal policies should clearly articulate what is permissible and impermissible.
  • Specific departments should specify further granularity regarding policy requirements and best practices (e.g., HR, IT, legal, and marketing will have different requirements and restrictions for the collection, use, and protection of personal data).
  • Privacy (legal and non-legal) and security professionals in the organization must have detailed documentation and process tools that streamline the implementation of the risk-based framework.
  • Ongoing organizational training regarding the importance of protecting personal data and best practices is essential to the continued success of these programs.
  • The policy requirements should be tied to the organization’s code of conduct and enforced as required when polices are violated.

Finally, an organization must have easy-to-understand external privacy and data security policies to educate the user/consumer and to drive toward informed consent to collect and share data wherever possible. The aim must be to make security and privacy ubiquitous, simple, and understood by all.

As we acknowledge Data Privacy Day this year, we hope that privacy will not only be a talking point for policymakers but that it will also result in action. Constructing and agreeing upon U.S. privacy principles through legislation or a framework will be a complicated process. We better start now because we’re already behind many other countries around the globe.

The post Privacy and Security by Design: Thoughts for Data Privacy Day appeared first on McAfee Blogs.

Sharing Isn’t Always Caring: 3 Tips to Help Protect Your Online Privacy

It’s 2019 and technology is becoming more sophisticated and prevalent than ever. With more technology comes greater connectivity. In fact, by 2020, there will be more than 20 billion internet-connected devices around the world. This equates to more than four devices per person. As we adopt new technology into our everyday lives, it’s important to consider how this emerging technology could lead to greater privacy risks if we don’t take steps to protect our data. That’s why the National Cyber Security Alliance (NCSA) started Data Privacy Day to help create awareness surrounding the importance of recognizing our digital footprints and safeguarding our data. To further investigate the impact of these footprints, let’s take a look at how we perceive the way data is shared and whose responsibility it is to keep our information safe.

The Impact of Social Media

Most of us interact with multiple social media platforms every day. And while social media is a great way to update your friends and family on your daily life, we often forget that these platforms also allow people we don’t really know to glimpse into our personal lives. For example, 82% of online stalkers use social media to find out information about potential victims, such as where they live or where they go to school. In other words, social media could expose your personal information to users beyond your intended audience.

Certain social media trends also bring up issues of privacy in the world of evolving technology. Take Facebook’s 10-year challenge, a recent viral trend encouraging users to post a side-by-side image of their profile pictures from 2009 and 2019. As WIRED reporter Katie O’Neill points out, the images offered in this trending challenge could potentially be used to train facial recognition software for age progression and age recognition. While the potential of this technology is mostly mundane, there is still a risk that this information could be used inequitably.

How to Approach Requests for Personal Data

Whether we’re using social media or other online resources, we all need to be aware of what personal data we’re offering out and consider the consequences of providing the information. While there are some instances where we can’t avoid sharing our personal data, such as for a government document or legal form, there are other areas where we can stand to be a little more conservative with the data that we divulge. For example, many of us have more than just our close family and friends on our social networks. So, if you’re sharing your location on your latest post, every single person who follows you has access to this information. The same goes for those online personality quizzes. While they may be entertaining, they put an unnecessary amount of your personal information out in the open. This is why it’s crucial to be thoughtful of how your data is collected and stored.

So, what steps can you take to better protect your online privacy? Check out the following tips to help safeguard your data:

  • Think before you post. Before tagging your friends on Instagram, sharing your location on Facebook, or enabling facial recognition, consider what this information reveals and how it could be used by a third-party.
  • Set privacy and security settings. If you don’t want the entire World Wide Web to be able to access your social media, turn your profiles to private. You can also go to your device settings and choose which apps or browsers you want to share your location with and which ones you don’t.
  • Enable two-factor authentication. In the chance your data does become exposed, a strong, unique password can help prevent your accounts from being hacked. Furthermore, you can implement two-factor authentication to stay secure. This will help strengthen your online accounts with a unique, one-time code required to log in and access your data.
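To make the "unique, one-time code" idea in the last tip concrete, here is a hedged sketch of how a time-based one-time password (TOTP, per RFC 6238) is derived. This is illustrative Python only, not any particular authenticator app's implementation; real apps also handle secret provisioning, clock drift, and rate limiting:

```python
# Sketch of TOTP (RFC 6238) code derivation using only the standard library.
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    # Counter = number of 30-second time steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s, 8 digits.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # "94287082"
```

Because the code depends on the current time step and a shared secret, it expires within seconds and can't be reused, which is what makes it a stronger second factor than a password alone.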

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post Sharing Isn’t Always Caring: 3 Tips to Help Protect Your Online Privacy appeared first on McAfee Blogs.

The ELEVENTH Annual Disaster Recovery Breakfast: Is that you Caesar?

Posted under: General

Things have been good in security. Really good. For a really long time. We can remember when there were a couple hundred people that showed up for the RSA Conference. Then a couple thousand. Now over 40,000 people descend on San Francisco to check out this security thing. There are hundreds of companies talking cyber. VC money has flowed for years, funding pretty much anything cyber. Cyber cyber cyber.

But alas, being middle-aged fellows, we know that all good things come to an end. OK, maybe not an end, but certainly a hiccup or two. Is 2019 the year we see the security market slow a bit? Is there a reckoning upon us, as we hypothesized on a recent Firestarter? Will we finally be able to get a room at any of the hotels in SF during RSA week? We at Securosis tend to be a lot better at figuring out market direction than timing. But we aren’t taking any chances.

So our plan is to party it up while we still can. And that means hosting the Disaster Recovery Breakfast once again. We can’t promise that Brutus will make an appearance, but Rich, Adrian, and Mike will certainly be there. And you’ll also be able to check out the progress we’ve made at DisruptOps. It’s pretty astounding if we do say so ourselves. It seems scaling cloud security and operations continues to be challenging for folks. Shocker!

We remain grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the insanity that is the RSAC. By Thursday it’s very nice to have a place to kick back, have some quiet conversations, and grab a nice breakfast. Or don’t talk to anyone at all and embrace your introvert – we get that too.

The DRB happens only because of the support of our long-time supporters CHEN PR, LaunchTech, CyberEdge Group, and our media partner Security Boulevard. We’re excited to welcome Guyer Group and Babel PR to the family as well. Please make sure to say hello and thank them for helping support your recovery.

As always the breakfast will be Thursday morning of RSA Week (March 7) from 8-11 at Tabletop Tap House in the Metreon (fka Jillian’s). It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open. Mike has officially become an old guy and can only drink decaf coffee (high blood pressure, sigh), but you can be sure there will be a little something-something in his Joe.

Please remember what the DR Breakfast is all about. No spin, no magicians and Rich will not be in his Star Wars costume (we think) - it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. We are confident you will enjoy the DRB as much as we do.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.

- Mike Rothman

Egregious Misconduct Lawsuit For 2014 Yahoo Security Management

It was the 10th of March 2014, the bugles were blaring. A red carpet was unrolled. Who was this man of mystery coming into view? He came with no prior CSO experience, let alone large operation skills. Suddenly out of nowhere, front and center of Yahoo’s own financial news site was the answer: Watch out, …

Rethinking the detection of child sexual abuse imagery on the internet

A critical part of the child sexual abuse criminal world is the creation and distribution of child sexual abuse imagery (CSAI) on the Internet. To combat this crime efficiently and illuminate current defense shortcomings, it is vital to understand how CSAI content is disseminated on the Internet. Despite the importance of the topic, very little work has been done on the subject so far.

To fill this gap and provide a comprehensive overview of the current situation we conducted the first longitudinal measurement study of CSAI distribution across the Internet. In collaboration with the National Center for Missing and Exploited Children (NCMEC)—a United States clearinghouse for all CSAI content detected by the public and US Internet services—we examined the metadata associated with 23.4M incidents of CSAI from the 1998–2017 period.

This talk starts by summarizing the key insights we garnered during this study about how CSAI content distribution evolved. In particular we will cover how Internet technologies have exponentially accelerated the pace of CSAI content creation and distribution to a breaking point in the manual review capabilities of NCMEC and law enforcement.

Then we will delve into the most pressing challenges that need to be addressed to be able to keep up with the steady increase of CSAI content and outline promising directions to help meet those challenges.

NBlog Jan 28 – creative technical writing


"On Writing and Reviewing ..." is a fairly lengthy piece written for EDPACS (the EDP Audit, Control, and Security Newsletter) by Endre Bihari. 

Endre discusses the creative process of writing and reviewing articles, academic papers in particular although the same principles apply more widely - security awareness briefings, for example, or training course notes. Articles for industry journals too. Even scripts for webcasts and seminars etc. Perhaps even blogs.

Although Endre's style is verbose and the language quite complex in places, I find his succinct bullet-point advice to reviewers more accessible. For example, on the conclusion section he recommends asking:
  • Are there surprises? Is new material produced?
  • How do the results the writer arrived at tie back to the purpose of the paper?
  • Is there a logical flow from the body of the paper to the conclusion?
  • What are the implications for further study and practice?
  • Are there limitations in the paper the reader might want to investigate? Are they pointed at sufficiently?
  • Does the writing feel “finished” at the end of the conclusion?
  • Is the reader engaged until the end?
  • How does the writer prompt the reader to continue the creative process?
I particularly like the way Endre emphasizes the creative side of communicating effectively. Even formal academic papers can be treated as creative writing. In fact, most would benefit from a more approachable, readable style. 

Interestingly, Endre points out that the author, reviewer and reader are key parties to the communication, with a brief mention of the editor responsible for managing the overall creative process. Good point!

Had I been asked to review Endre's paper, I might have suggested consolidating the bullet-points into a checklist, perhaps as an appendix or a distinct version of his paper. Outside of academia, the world is increasingly operating on Internet time due, largely, to the tsunami of information assaulting us all. Some of us want to get straight to the point first, then, if our interest has been piqued, perhaps explore in more detail from there, which suggests the idea of layering the writing: more succinct and direct at first, with successive layers expanding on the depth. [Endre does discuss the abstract (or summary, executive summary, precis, outline or whatever) but I'm talking here about layering the entire article.]

Another suggestion I'd have made is to incorporate diagrams and figures, in other words using graphic images to supplement or replace the words. A key reason is that many of us 'think in pictures': we find it easier to grasp concepts that are literally drawn out for us rather than (just) written about. There is an art to designing and producing good graphics, though, requiring a set of competencies or aptitudes distinct from writing. 

Graphics are especially beneficial for technical documentation including security awareness materials, such as the NoticeBored seminar presentations and accompanying briefing papers. We incorporate a lot of graphics such as:
  • Screen-shots showing web pages or application screens such as security configuration options;
  • Graphs - pie-charts, bar-charts, line-charts, spider or radar diagrams etc. depending on the nature of the data;
  • Mind-maps separating the topic into key areas, sometimes pointing out key aspects, conceptual links and common factors;
  • Process flow charts;
  • Informational and motivational messages with eye-catching photographic images;
  • Conceptual diagrams, often mistakenly called 'models' [the models are what the diagrams attempt to portray: the diagrams are simply representational];
  • Other diagrams and images, sometimes annotated and often presented carefully to emphasize certain aspects.
Also, by the way, we use buttons, text boxes, colors and various other graphic devices to pep-up our pieces, for example turning plain (= dull!) bullet point lists into structured figures like this slide plucked from next month's management-level security awareness and training seminar on "Mistakes":

So, depending on its intended purpose and audience, a graphical version of Endre's paper might have been better for some readers, supplementing the published version. At least, that's my take on it, as a reviewer and tech author by day. YMMV


NBlog Jan 27 – streaming awareness content

As the materials fall into place for "Mistakes", our next security awareness module, it's interesting to see how the three content streams have diverged:

  • For workers in general, the materials emphasize making efforts to avoid or at least reduce the number of mistakes involving information such as spotting and self-correcting typos and other simple errors.
  • For managers, there are strategic, governance and information risk management aspects to this topic, with policies and metrics etc.
  • For professionals and specialists, error-trapping, error-correction and similar controls are of particular interest.
The 'workers' audience includes the other two, since managers and pro's also work (quite hard, usually!), while professional/specialist managers (such as Information Risk and Security Managers) belong to all three audiences. In other words, according to someone's position or role in the organization, there are several potentially relevant aspects to the topic.

That's what we mean by 'streaming'. It's not (just) about delivering content via streaming media: the audiences matter.

#PrivacyAware: Will You Champion Your Family’s Online Privacy?

The perky cashier stopped my transaction midway to ask for my email and phone number.

Not now. Not ever. No more. I’ve had enough. I thought to myself.

“I’d rather not, thank you,” I replied.

The cashier finished my transaction and moved on to the next customer without a second thought.

And, my email and phone number lived in one less place that day.

This seemingly insignificant exchange happened over a year ago, but it represents the day I decided to get serious and champion my (and my family’s) privacy.

I just said no. And I’ve been doing it a lot more ever since.

A few changes I’ve made:

  • Pay attention to privacy policies (especially of banks and health care providers).
  • Read the terms and conditions of apps before downloading.
  • Block cookies from websites.
  • Refuse to purchase from companies that (appear to) take privacy lightly.
  • Max my privacy settings on social networks.
  • Change my passwords regularly and keep them strong!
  • Delete apps I no longer use.
  • Stay on top of software updates on all devices and add extra protection.
  • Have become hyper-aware before giving out my email, address, phone number, or birth date.
  • Limit the number of photos and details shared on social media.

~~~

The amount of personal information we share every day online — and off — is staggering. There’s information we post directly online such as our birth date, our location, our likes, and dislikes. Then there’s the data that’s given off unknowingly via web cookies, metadata, downloads, and apps.

While some data breaches are out of our control, at the end of the day, we — along with our family members — are one giant data leak.

Studies show that on average by the age of 13, parents have posted 1,300 photos and videos of their child to social media. By the time kids get devices of their own, they are posting to social media 26 times per day on average — a total of nearly 70,000 posts by age 18.

The Risks

When we overshare personal data a few things can happen. Digital fallout includes data misuse by companies, identity theft, credit card fraud, medical fraud, home break-ins, reputation damage, location and purchasing tracking, ransomware, and other risks.

The Mind Shift

The first step toward boosting your family’s privacy is to start thinking differently about privacy. Treat your data like gold (after all, that’s the way hackers see it). Guiding your family in this mind-shift will require genuine, consistent effort.

Talk to your family about privacy. Elevate its worth and the consequences when it’s undervalued or shared carelessly.

Teach your kids to treat their personal information — their browsing habits, clicks, address, personal routine, school name, passwords, and connected devices — with great care. Consider implementing this 11 Step Privacy Take Back Plan.

This mind and attitude shift will take time but, hopefully, your kids will learn to pause and think before handing over personal information to an app, a social network, a retail store, or even to friends.

Data Protection Tips*

  1. Share with care. Think before posting about yourself and others online. Consider what it reveals, who might see it and how it could be perceived now and in the future.
  2. Own your online presence. Set the privacy and security settings on websites and apps to your comfort level for information sharing. Each device, application or browser you use will have different features to limit how and with whom you share information.
  3. Think before you act. Information about you, such as the games you like to play, your contacts list, where you shop and your geographic location, has tremendous value. Be thoughtful about who gets that information and understand how it’s collected through websites and apps.
  4. Lock down your login. Your usernames and passwords are not enough to protect critical accounts like email, banking, and social media. Strengthen online accounts and use strong authentication tools like a unique, one-time code through an app on your mobile device.

* Provided by the National Cyber Security Alliance (NCSA).

January 28 National Data Privacy Day. The day highlights one of the most critical issues facing families today — protecting personal information in a hyper-connected world. It’s a great opportunity to commit to taking real steps to protect your online privacy. For more information on National Data Privacy Day or to get involved, go to Stay Safe Online.

The post #PrivacyAware: Will You Champion Your Family’s Online Privacy? appeared first on McAfee Blogs.

The Emergence of Geopolitical Fuelled Cyber Attacks

A new breed of cyberattack is emerging into the threat landscape, fuelled by geopolitical tension: recent industry reports describe a rise in stealthy and sophisticated cyber attacks. Carbon Black's 2019 Global Threat Report, released on Wednesday (23/1/19), concluded global governments experienced an increase in cyberattacks during 2018 stemming from Russia, China and North Korea, while nearly 60% of all attacks involved lateral movement.

'Lateral Movement' is where an attacker progressively and stealthily moves through a victim's network to find their targets, which are typically datasets or critical assets. This is an attack of sophistication, requiring skill, resources and persistence, beyond the interest of average criminal hackers, who go after the lowest-hanging fruit for an easier financial return.


Carbon Black concluded that as 2018 came to a close, China and Russia were responsible for nearly half of all cyberattacks they detected. 

US and UK government agencies have publicly articulated their distrust of Chinese tech giant Huawei, which resulted in BT removing Huawei IT kit from their new 5G and existing 4G networks last month. UK Defence Secretary Gavin Williamson said he had "very deep concerns" about Huawei being involved with the new UK mobile network due to security concerns. At the end of 2017 the UK National Cyber Security Centre warned government agencies against using Kaspersky's products and services, which followed a ban by the US government. Barclays responded by removing its free offering of Kaspersky anti-virus to its customers. The UK and US also blamed North Korea for the devastating WannaCry attacks in 2017.

Another interesting stat from the Carbon Black Global Threat Report that caught the eye: 2018 saw approximately $1.8 billion worth of cryptocurrency thefts, which underlines that the cyber-criminal threat remains as large as ever within the threat landscape.

What is a firewall? How they work and how they fit into enterprise security

Firewalls have been around for three decades, but they’ve evolved drastically to include features that used to be sold as separate appliances and to pull in externally gathered data to make smarter decisions about what network traffic to allow and what traffic to block.

Now just one indispensable element in an ecosystem of network defenses, the latest versions are known as enterprise firewalls or next-generation firewalls (NGFW) to indicate who should use them and that they are continually adding functionality.

What is a firewall?

A firewall is a network device that monitors packets going in and out of networks and blocks or allows them according to rules that have been set up to define what traffic is permissible and what traffic isn’t.
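As a toy illustration of that rule-matching idea, here is a first-match-wins packet filter sketched in Python. The rule fields and the default-deny policy are illustrative choices of mine, not any vendor's configuration syntax:

```python
# Toy firewall: evaluate rules in order, first match wins, default deny.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443},   # inbound HTTPS
    {"action": "allow", "proto": "tcp", "dst_port": 22},    # inbound SSH
    {"action": "deny",  "proto": "any", "dst_port": None},  # catch-all deny
]

def decide(packet):
    """Return 'allow' or 'deny' for a packet dict with 'proto' and 'dst_port'."""
    for rule in RULES:
        proto_ok = rule["proto"] in ("any", packet["proto"])
        port_ok = rule["dst_port"] in (None, packet["dst_port"])
        if proto_ok and port_ok:
            return rule["action"]
    return "deny"  # nothing matched: fail closed

print(decide({"proto": "tcp", "dst_port": 443}))  # allow
print(decide({"proto": "udp", "dst_port": 53}))   # deny
```

Real firewalls add stateful connection tracking, address matching, and (in NGFWs) application and identity awareness on top of this basic ordered-rule evaluation.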


Why You Need to Block the Threat Factory. Not Just the Threats.

 

Cyber criminals will create roughly 100 million new malware variants over the next 12 months. Security vendors will respond with new malware signatures and behaviors to stop them, but thousands of companies will be victimized in the process, experiencing costly or catastrophic breaches. This isn’t new - it’s a cycle.

What is a supply chain attack? Why you should be wary of third-party providers

A supply chain attack, also called a value-chain or third-party attack, occurs when someone infiltrates your system through an outside partner or provider with access to your systems and data. This has dramatically changed the attack surface of the typical enterprise in the past few years, with more suppliers and service providers touching sensitive data than ever before.

Symbolic Path Merging in Manticore

Each year, Trail of Bits runs a month-long winter internship “winternship” program. This year we were happy to host 4 winterns who contributed to 3 projects. This is the first in a series of blog posts covering the 2019 Wintern class.

Our first report is from Vaibhav Sharma (@vbsharma), a PhD student at the University of Minnesota. Vaibhav’s research focuses on improving symbolic executors and he took a crack at introducing a new optimization to Manticore:

Symbolic Path Merging in Manticore

My project was about investigating the use of path-merging techniques in Manticore, a symbolic execution engine that supports symbolic exploration of binaries compiled for X86, X64, and Ethereum platforms. A significant barrier for symbolic exploration of many practical programs is path explosion. As a symbolic executor explores a program, it encounters branch instructions with two feasible sides. The symbolic executor needs to explore both sides of the branch instruction. Manticore explores such branch instructions by forking the path that reached the branch instruction into two paths each of which explores a feasible side. A linear increase in the number of branch instructions with both sides feasible causes an exponential increase in the number of paths Manticore needs to explore through the program. If we hit enough of these branch conditions, Manticore may never finish exploring all the states.
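The blow-up described above is easy to see with a back-of-the-envelope sketch (illustrative Python, not Manticore's API): every branch with two feasible sides doubles the set of live states, so n such branches yield 2^n paths:

```python
def count_paths(num_two_sided_branches):
    """Paths a forking symbolic executor must explore when every
    branch instruction it meets has both sides feasible."""
    states = 1
    for _ in range(num_two_sided_branches):
        states *= 2  # each feasible two-sided branch forks every live state
    return states

print(count_paths(10))  # 1024
print(count_paths(30))  # 1073741824 -- already over a billion paths
```

A linear increase in two-sided branches thus causes an exponential increase in paths, which is why even modest programs can exhaust an unmerged explorer.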

Path merging reduces the number of paths to be explored. The central idea is to merge paths at the same program location that are similar. Manticore uses the notion of a “state” object to capture the processor, memory, and file system information into a single data structure at every point of symbolic exploration through a program. Hence, path merging can be specialized to “state merging” in Manticore where merging similar states that are at the same program location leads to an exponential reduction in the number of paths to explore. With a simple program, I observed Manticore could cut its number of explored execution paths by 33% if it merged similar states at the same program location.
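A minimal sketch of the core merging step may help (the class and field names here are my own simplifications, not Manticore's real state objects): two states at the same program counter merge into one whose path condition is the disjunction of the originals, and whose differing values become if-then-else terms:

```python
# Simplified model of state merging. Symbolic expressions are plain
# strings here; a real engine would build solver terms instead.
class State:
    def __init__(self, pc, path_condition, registers):
        self.pc = pc
        self.path_condition = path_condition  # e.g. "x > 0"
        self.registers = dict(registers)      # register name -> expression

def merge(a, b):
    """Merge two states at the same program location into one."""
    assert a.pc == b.pc, "only states at the same location may merge"
    merged_regs = {}
    for name in a.registers:
        va, vb = a.registers[name], b.registers[name]
        # Equal values pass through; unequal ones become ITE terms
        # guarded by the first state's path condition.
        merged_regs[name] = va if va == vb else f"ite({a.path_condition}, {va}, {vb})"
    merged_cond = f"({a.path_condition}) or ({b.path_condition})"
    return State(a.pc, merged_cond, merged_regs)

s1 = State(0x400123, "x > 0", {"rax": "1", "rbx": "y"})
s2 = State(0x400123, "x <= 0", {"rax": "0", "rbx": "y"})
m = merge(s1, s2)
print(m.path_condition)      # (x > 0) or (x <= 0)
print(m.registers["rax"])    # ite(x > 0, 1, 0)
```

The merged state stands in for both originals, so the two downstream explorations collapse into one at the cost of more complex solver queries.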

State merging can be implemented statically or dynamically. Static state merging explores the control-flow graph of the subject program in topological order and merges states at the same program location when possible. Veritesting is a path-merging technique similar to static state merging; it requires paths to be at the same program location in order to merge them. Dynamic state merging does not require two states to be at the same program location for them to be considered for merging. Given two states a1, a2 at different program locations l1, l2 respectively, if a transitive successor a1′ of a1 has a high and beneficial similarity to a2, dynamic state merging fast-forwards a1 to a1′ and merges it with a2. The fast-forwarding involves overriding the symbolic executor’s search heuristic to reach l2. Dynamic state merging uses the intuition that if two states are similar, their successors within a few steps are also likely to be similar.

While it is possible to implement either technique in Manticore, I chose dynamic state merging as described by Kuznetsov et al., since it is a better fit for Manticore’s state-based (rather than path-based) symbolic executor. Static state merging is also less suited to symbolic exploration guided towards a goal and more suited to exhaustive exploration of a subject program: since it can only merge states at the same program location, when directed towards a goal it tends to cover less code than dynamic state merging in the same time budget. This was also a conclusion of Kuznetsov et al. (see Figure 8 from their paper, below). Since we often use symbolic execution to reach an exploration goal, dynamic state merging was the better choice for our needs.

DSM and SSM vs KLEE coverage

Dynamic State Merging (DSM) provided more statement coverage than Static State Merging (SSM). Figure from Kuznetsov et al., “Efficient State Merging in Symbolic Execution,” PLDI 2012.

Engineering Challenges

Both static and dynamic state merging require the use of an external static analysis tool like Binary Ninja to find the topological ordering of program locations. Given the short duration of my winternship, I chose to implement opportunistic state merging which only merges states that happen to be at the same program location. While this approach does not give the full benefit of dynamic state merging, it is easier to implement because it does not rely on integration with an external static analysis tool to obtain topological ordering. This approach is also easily extensible to dynamic state merging since it uses many of the same primitive operations like state comparison and state merging.

Implementation

I implemented opportunistic state merging for Manticore. The implementation uses an “isMergeable” predicate to check whether two states at the same program location have semantically equivalent input and output socket buffers, memory, and system-call traces. If the predicate is satisfied, the implementation merges the two states, combining any semantically inequivalent CPU register values.
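The shape of the predicate and the register merge can be sketched as below. The dict-based states, key names, and the `("ite", …)` tuple are illustrative stand-ins for Manticore's state objects and symbolic expressions, not its actual API:

```python
# Sketch of the "isMergeable" check: both states must be at the same
# program location with semantically equivalent input/output socket
# buffers, memory, and system-call traces.
def is_mergeable(s1, s2):
    keys = ("pc", "input_buf", "output_buf", "mem", "syscalls")
    return all(s1[k] == s2[k] for k in keys)

# Sketch of the register merge: registers that differ become if-then-else
# expressions guarded by the branch condition that separated the two paths.
def merge_registers(s1, s2, branch_cond):
    merged = {}
    for reg, v1 in s1["regs"].items():
        v2 = s2["regs"][reg]
        merged[reg] = v1 if v1 == v2 else ("ite", branch_cond, v1, v2)
    return merged
```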

Results

I used a simple example in which I could see two states saved in Manticore’s queue at the same program location, making them good candidates for merging. The partial CFG of this example program is shown below.

Merged CFG Annotated

The two basic blocks highlighted in red cause control flow to merge at the basic block highlighted in green. The first red block causes control flow to jump directly to the green block. The second red block moves a constant (0x4a12dd) into the edi register and then jumps to the green block. To explore this example, Manticore creates two states: one that explores the first red block and jumps to the green block, and another that explores the second red block and jumps to the green block.

Since the only difference between these two states, which are at the same program location (the green block), is the value in their edi register, Manticore can merge them into a single state whose edi value is an if-then-else expression. The condition of this expression is the condition that decides which side of the branch (jbe 0x40060d) gets taken. If the condition is satisfied, the expression evaluates to the value edi holds after the first red block; if not, it evaluates to 0x4a12dd (the constant set in the second red block). Thus, Manticore opportunistically merges two control-flow paths into one, which cuts the number of execution paths by 33% when the binary is compiled with gcc’s -Os optimization option, and by 20% with -O3.
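The behavior of the merged edi register can be sketched concretely. The value along the first path is not stated above, so the constant used for it here is purely illustrative; only 0x4a12dd comes from the example:

```python
# Stand-in for a symbolic if-then-else expression: once the branch
# condition (whether jbe 0x40060d is taken) is decided, the merged
# register collapses to the value from the corresponding path.
def ite(cond, then_val, else_val):
    return then_val if cond else else_val

EDI_FIRST_RED_BLOCK = 0x0        # illustrative placeholder value
EDI_SECOND_RED_BLOCK = 0x4A12DD  # constant moved to edi on the second path

# Branch taken -> value from the first red block; not taken -> 0x4a12dd.
first = ite(True, EDI_FIRST_RED_BLOCK, EDI_SECOND_RED_BLOCK)
second = ite(False, EDI_FIRST_RED_BLOCK, EDI_SECOND_RED_BLOCK)
```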

Directions for future improvement:

  1. Extend this implementation to gain the full benefits of dynamic state merging by integrating Manticore with a tool that can provide a topological ordering of program locations.
  2. Investigate whether the new symbolic data introduced by state merging causes more branching later during exploration: state merging always creates new symbolic data, since it converts all concrete writes in a merged region of code to symbolic writes.
  3. Implement heuristics such as the query count estimation of Kuznetsov et al., so that state merging is applied only when it is most useful.

Path merging is a technique that must be adapted to fit the needs of a particular symbolic executor. This winternship allowed me to understand the inner workings of Manticore, a state-based symbolic executor, and to adapt path merging to the use case of binary symbolic execution with Manticore. My implementation of opportunistic state merging merges similar states when they are at the same program location. It can be used from a Python script by registering the Merger plugin with Manticore; basic_statemerging.py under examples/script shows an example of such use.