Category Archives: open source

Jump-Start Your Management of Known Vulnerabilities

Organizations must manage known vulnerabilities in web applications. When it comes to application security, the Open Web Application Security Project (OWASP) Foundation Top 10 is the primary place to start reviewing and testing applications.

The OWASP Foundation’s list brings some important questions to mind: Which vulnerability in the Top 10 has been the root of the most security breaches? Which has the highest likelihood of being exploited?

While “known vulnerable components” comes in at number nine on the list, it’s the weakness that is most often exploited, according to security firm Snyk. The OWASP Foundation stressed on its website, however, that the issue was still widespread and prevalent: “Depending on the assets you are protecting, perhaps this risk should be at the top of the list.”

So, how can these known vulnerabilities be managed?

Vulnerable Components Can Lead to Breaches

Components in this context are libraries that provide the framework and other functionality in an application. Many cyberattacks and breaches are caused by vulnerable components, a trend that will likely continue, according to Infosecurity Magazine.

Recent examples include the following:

  • Panama Papers: The Panama Papers breach was one of the largest-ever breaches in terms of volume of information leaked. The root cause was an older version of Drupal, a popular content management system, as noted by Forbes.
  • Equifax: The Equifax breach was one of the most severe data breaches because of the amount of highly sensitive data it leaked, as noted by Forbes. The root cause was an older version of Apache Struts.

Often, this vulnerability is not given the attention it requires. Many organizations may not even have a proper inventory of dependent libraries. Static code analysis or vulnerability scans usually don’t report components with known vulnerabilities. In many cases, component versions have reached their “end of life” but are still in use.

It’s also worth considering the complexity of managing component licenses. There are many open source licenses with varying terms and conditions. Some licenses are permissive and some are permissive with conditions (strong or weak copyleft). The Open Source Initiative (OSI) lists more than 80 approved licenses.

Most Components Are Older Versions With Known Vulnerabilities

Synopsys reported that more than 60 percent of libraries in use are older versions with known vulnerabilities. A close look at most applications’ component profiles suggests this is no exaggeration. Most of the web applications running today use open source components in some way or another.

The popular open source frameworks for web applications include:

  • Java: Struts, Spring MVC, Spring Boot, MyFaces and Hibernate
  • JavaScript: Angular and Node.js
  • .NET: the CSLA framework
  • Many frameworks in PHP, Python and Ruby

There are also many object-relational mapping components, reporting tools, message broker components and a plethora of other utility components to consider. These components offer organizations great advantages in terms of cost-benefit, future readiness and digital transformation. They also benefit from wide developer bases that actively develop and maintain them.

But are you using an older version of these components? Do they have reported vulnerabilities? Common Vulnerabilities and Exposures (CVE) entries for components are listed in the MITRE CVE list and the National Vulnerability Database (NVD).
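As a sketch of what answering those questions involves, the snippet below checks an installed component version against the release that fixes a known issue. The component coordinates and version numbers are illustrative placeholders, not real advisory data; a real check would pull affected-version ranges from NVD.

```python
# Minimal version-range check: a component is flagged when its version
# predates the release that fixed a known vulnerability.
def parse_version(v):
    # Assumes simple dotted numeric versions, e.g. "2.3.5"
    return tuple(int(p) for p in v.split("."))

def is_vulnerable(installed, fixed_in):
    return parse_version(installed) < parse_version(fixed_in)

# Hypothetical advisory data, for illustration only
advisories = {"org.apache.struts:struts2-core": "2.3.32"}
installed = {"org.apache.struts:struts2-core": "2.3.5"}

for name, version in installed.items():
    fixed = advisories.get(name)
    if fixed and is_vulnerable(version, fixed):
        print(f"{name} {version} is affected; upgrade to {fixed} or later")
```

In practice, version schemes with qualifiers, epochs or pre-releases need a proper version-comparison library rather than simple tuple comparison.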

More Than 80 Open Source License Types

Managing open source licenses is an important activity for an organization’s open source strategy and legal and standard compliance programs. Managing licenses for components can be complex. Due care must be given to note the license version, as some may have significantly different terms and conditions from one version to another. Developers may add open source libraries to applications without giving much thought about licenses.

The perception is that open source is “free.” However, the fact is it’s “free” with conditions attached to its usage.

If we review the license clauses carefully, the requirements are more stringent when it comes to distributed software. Reviewing the license requirements of a component will also include reviewing the licenses of transitive dependencies or pedigrees — the components on which it is built. Open source compliance programs usually cover software installed on machines but may not cover the libraries used by web applications.

Automate to Identify Components With Known Vulnerabilities and License Risks

NVD uses Common Platform Enumeration (CPE) as the structured naming scheme for information technology (IT) systems, software and packages. The tools that automate the process get the CPE dictionary and CVE feed from NVD. The feeds are available in JSON or XML formats. The tools parse the feeds and scan through them with the CPE to provide reports.
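That parse-and-match step can be sketched as follows. The feed structure here is deliberately simplified for illustration; real NVD JSON feeds nest CPE match entries more deeply under each CVE item.

```python
import json

# Simplified, illustrative feed: each entry lists the CPE names an
# advisory applies to (real NVD feeds nest these under configurations).
feed = json.loads("""
[
  {"cve": "CVE-2017-5638",
   "cpes": ["cpe:2.3:a:apache:struts:2.3.5:*:*:*:*:*:*:*"]},
  {"cve": "CVE-2014-0160",
   "cpes": ["cpe:2.3:a:openssl:openssl:1.0.1:*:*:*:*:*:*:*"]}
]
""")

def matches(feed, vendor, product):
    # Match on the application part of the CPE 2.3 name
    prefix = f"cpe:2.3:a:{vendor}:{product}:"
    return [entry["cve"] for entry in feed
            if any(cpe.startswith(prefix) for cpe in entry["cpes"])]

print(matches(feed, "apache", "struts"))  # → ['CVE-2017-5638']
```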

OWASP provides Dependency-Check, which identifies reported vulnerabilities in project dependencies. It’s easy to use from the command line or integrated into the build process, with plug-ins for popular build management tools, including Maven, Ant, Gradle and Jenkins. The build tool Maven has a “site” plug-in; running the “mvn site” command produces an application-specific report that also shows license information for dependencies.

There are many other commercial tools with more sophisticated functionality beyond vulnerability identification and listing licenses. There are also sources other than NVD and the MITRE CVE list that provide details on known bugs, such as RubySec, the Node Security Platform and many bug-tracking systems.

IBM Application Security on Cloud has an Open Source Analyzer to identify component vulnerabilities. It’s recommended to integrate the tools in the build process, so the component profile is taken at the earliest stage of the development phase. This allows users to monitor the component profile during maintenance and enhancements.

Addressing Component Issues: Upgrade, Replace or Migrate

The most important step in managing open source licenses is to have a policy on acceptable licenses. The policy has to be created in consultation with your legal department. The policy should be reviewed periodically and kept up-to-date. Building an inventory of components is also important.
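A license policy only helps if it is enforced against the inventory. The sketch below classifies components against an acceptable-license list; the license groupings and component names are illustrative, and the real policy must come from your legal team.

```python
# Illustrative policy buckets; a real policy is defined with legal counsel.
APPROVED = {"MIT", "Apache-2.0", "BSD-3-Clause"}
REVIEW_REQUIRED = {"LGPL-2.1", "MPL-2.0"}

def classify(component):
    lic = component["license"]
    if lic in APPROVED:
        return "approved"
    if lic in REVIEW_REQUIRED:
        return "needs legal review"
    return "blocked pending review"

inventory = [
    {"name": "spring-core", "license": "Apache-2.0"},
    {"name": "hibernate-core", "license": "LGPL-2.1"},
    {"name": "some-copyleft-lib", "license": "GPL-3.0"},
]

for component in inventory:
    print(f'{component["name"]}: {classify(component)}')
```

A check like this can run in the build alongside the vulnerability scan, so both gates are applied at the same point in the pipeline.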

Once components have been checked for vulnerabilities and license-policy compliance, how to address the findings is context-specific. You can either upgrade to the latest version or replace components with alternatives. This requires a risk-based approach and planning. Framework upgrades, or moving to a different framework or technology, could require significant development effort. The approach has to be decided based on risk and cost, considering all alternative deployment models and technologies.

Upgrading or migrating components can be rewarding. In addition to addressing security issues, it can provide an opportunity to improve application performance and resolve compatibility issues caused by older component versions.

Component management is a continuous process, as vulnerabilities are frequently reported, even in the latest versions. Obviously, it’s not practical to upgrade or migrate each time an issue is reported; often patches (minor version upgrades) will be available to address the issues. Component management should be given adequate consideration and must be an integral part of an organization’s application security and compliance programs.

The post Jump-Start Your Management of Known Vulnerabilities appeared first on Security Intelligence.

5 Best GitHub Alternatives To Host Open Source Projects

Not happy with GitHub being acquired by Microsoft? Here are the 5 best GitHub alternatives.

GitHub, the largest source-code repository in the world, has been in the news lately, thanks to Microsoft, which recently announced that it would purchase the hosted Git (version control system) service GitHub Inc. for $7.5 billion in an all-stock deal.

For those who are not aware, GitHub is a popular web-based hosting service for source code and development projects that allows developers to use the privately held company’s tools to store code and to change, adapt and improve software from its public repositories for free. GitHub users have a choice of using either Git or Subversion as their VCS (version control system) to manage, maintain and deploy software projects. More than 28 million developers already collaborate on the platform, working on more than 85 million repositories of code.

While Microsoft has assured that GitHub will continue to operate independently and will remain an open platform after the acquisition, open source developers are not hopeful and may look for an alternate place.

In this article, we present the top 5 GitHub alternatives for hosting your open-source project:

  1. GitLab

GitLab is a free and open source project licensed under MIT that is very close to GitHub in use and feel. However, GitLab trades some of GitHub’s ease of use for more privacy, security and serving speed. One of its unique features is that you can install GitLab on your own server.


GitLab’s UI is clean and intuitive, and it claims to handle large files and repositories better than GitHub. It supports an issue tracker, group milestones, moving issues between projects, configurable issue boards, group issues and more. It also offers powerful branching tools, protected branches and tags, time tracking, custom notifications, issue weights, merge requests, file locking, project roadmaps, confidential and related issues, and burndown charts for project and group milestones.

GitLab also allows users to have unlimited public and private repos for free. It is used by Stack Overflow, IBM, AT&T, Microsoft and more. GitLab comes in three versions: Community Edition, Enterprise Edition Starter and Enterprise Edition Premium, each with different features. It is recommended to understand your needs before selecting an edition.

  2. Bitbucket

Owned by Atlassian, Bitbucket is second only to GitHub in terms of popularity and usage. It is a web-based version control repository hosting service for source code and development projects. Unlike GitHub, which supports only Git and Subversion, Bitbucket supports the Mercurial VCS as well as Git. It is available on Windows and Mac for free.


Bitbucket offers free accounts with an unlimited number of private repositories for individuals and organizations (limited to five users; larger teams require a paid plan). Bitbucket lets you push files using any Git client or the Git command line, and it can also be controlled through its web interface. It also offers strong support for Git Large File Storage (LFS), which is useful for game development.

Bitbucket integrates and communicates well with JIRA, Bamboo and HipChat, which are part of the Atlassian software family. It also offers features such as code reviews, Bitbucket Pipelines, code search, pull requests, flexible deployment models, diff view, smart mirroring, issue tracking, IP whitelisting, unlimited private repos, commit history and branch permissions for safeguarding your workflow. Depending on your security needs, Bitbucket deploys in the cloud, on a local server or in your company’s data center.

  3. Launchpad

Launchpad is a free, popular platform for building, managing and collaborating on software projects from Canonical, the makers of Ubuntu Linux. It offers features such as code hosting, Ubuntu package building and hosting, bug tracking, code reviews, mailing lists and specification tracking. It also supports translations, answer tracking and FAQs. Launchpad has good support for Git, allowing you to host or import Git repositories on Launchpad for free.


Some of the popular projects hosted on Launchpad include Ubuntu Linux, MySQL, OpenStack, Terminator and more.

  4. SourceForge

SourceForge is a web-based service that offers software developers a centralized online location to control and manage free and open-source software projects. It was the first to offer this service for free to open-source projects.


SourceForge provides a source code repository, bug tracking, mirroring of downloads for load balancing, a wiki for documentation, developer and user mailing lists, user-support forums, user-written reviews and ratings, a news bulletin, a micro-blog for publishing project updates, and other features. SourceForge hosts many open-source projects for Linux, Windows and Mac, including Apache OpenOffice, FileZilla and many more.

SourceForge servers support PHP, Perl, Python, Tcl, Ruby and shell scripts. You can upload to SourceForge through an SFTP client. It offers the option of using Git, Subversion (SVN) or Mercurial (Hg) as your project’s VCS on SourceForge.

  5. GitBucket

GitBucket is an open source, highly pluggable Git platform that runs on JVM (Java Virtual Machine). It comes with features such as Public / Private Git repositories (with http/https and ssh access), GitLFS support, a repository viewer, issues, pull requests and wiki for repositories, activity timeline and email notifications, account and group management with LDAP integration, and a plug-in system to extend its core features.


The post 5 Best GitHub Alternatives To Host Open Source Projects appeared first on TechWorm.

Open Source Tools for Active Defense Security

A security truism: There will always be things we want to do, but we just can’t get the budget for them. Some strategies can be harder in this regard than others, including those that are technically sophisticated, less mainstream or more complicated to operationalize.

So, what can we do when this happens? One option is to employ free or open source tools in limited deployment. This choice can help demonstrate value and work as a proof point for future budget conversations — and can even work for a strategy like active defense.

If your goal is active defense, it may be more challenging to gain budgetary resources and executive support. This barrier makes open source options particularly useful because these tools can help you demonstrate value and shore up support.

What Is Active Defense?

This term originates from the defense world and refers to techniques that deny positions or strategic resources to adversaries, which then makes their campaigns more challenging to conduct. In the cybersecurity world, the goal is the same: Make the adversaries’ campaigns harder by denying them key resources. Active defense isn’t quite as mainstream as other security tools, such as anti-malware scanning tools, because much of the guidance (e.g., taxonomies of controls, regulatory mandates, etc.) doesn’t require them.

One reason active defense is particularly useful in a cybersecurity context is the time-constrained nature of an attacker’s campaign. There are multiple windows of time within these campaigns, including the time between discovery of a security vulnerability and when you fix it; when the attacker gains entry and when you discover he or she is there; and when the attacker is found and when he or she is caught, blocked or otherwise disabled.

Anything you can do to slow down attackers (or increase the amount of time and effort they need to invest) makes it more likely that you’ll discover them before they’re successful.

Active Defense Strategies and Open Source Tools

There are ways to increase the time and energy required on the part of the attacker: You can feed the attacker bad intelligence or false intel. You can waste his or her reconnaissance resources. You can trick the adversary into revealing his or her identity (which you or law enforcement can use later). Here are some of the most effective active defense strategies, as well as some free or open source tools you can test to see if this approach is compatible with your environment.

Active Defense Strategy 1: Decoys

Decoys can be used to distract attackers from real targets. Most security practitioners are familiar with honeypots or honeynets, which are sets of devices that look like “juicy targets” from an attacker’s point of view — but are actually just traps designed to tip you off. There are lots of great open source options that can do this, but your selection will ultimately depend on your environment and what you want to get out of it.

OpenCanary is a straightforward tool, both conceptually and in implementation: You set a profile (i.e., a personality) for what you want it to look like (e.g., a Linux or Windows server, a database server, etc.), and it sends alerts when someone connects to it. You can select from low-, medium- or high-interaction honeypots. These levels refer to how much interaction the honeypot can maintain with an attacker before he or she realizes it’s a decoy.

Depending on what type of device you want to simulate, there are many choices for each. WebTrap lets you simulate an internal web resource (e.g., an intranet or other page). Low- or medium-interaction tools like HoneyPy can be used to listen for requests and alert you when someone connects. High-interaction tools like Lyrebird can hold an attacker’s attention for a period — so you can figure out who the attacker is, observe his or her behavior or waste his or her time.

Active Defense Strategy 2: Attribution

The second strategy involves tools that are designed to trick an attacker into divulging his or her identity or location — or any other information that can be used to assist in law enforcement and other mitigation activities. It’s important to note that attribution sometimes requires interaction with the attacker. Therefore, it needs to be done carefully to ensure you’re not breaking the very same laws as your attacker.

Many of the tools we will explore moving forward can be used in both lawful and unlawful ways. The fine line between the two will depend on your usage and context. Therefore, you must ensure that planned usage is in accordance with the law. How can you tell? A solid strategy is to run the specifics past your legal team to get clarity and confirmation.

One tool that can assist with attribution is honeypot systems that have built-in attribution capability features. A tool like HoneyBadger, which has geolocation features to determine where an attacker is located, is a good example. This is particularly useful in combination with documents that are a beacon when opened, such as when integrated with a tool like Molehunt.

Alternatively, a tool like the Browser Exploitation Framework (BeEF) can assist in data collection within an attacker’s browser by gathering information from the remote party from within his or her web browser. BeEF provides quite a bit of functionality and is often used in penetration testing to gain a beachhead for internal attacks. (Carefully review the caveat about lawful versus unlawful usage here.)

Active Defense Strategy 3: Sinks and Traps

You can trap attacker activity so the attacker wastes time instead of realizing his or her campaign’s objectives. Tools like Spidertrap or Weblabyrinth generate “mazes” of bogus web content that waste an attacker’s time when he or she crawls them or scans them with automated tools.
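The idea behind these mazes can be sketched in a few lines: each page deterministically generates links to further synthetic pages, so a crawler never runs out of URLs. This is a toy illustration of the concept, not how Spidertrap or Weblabyrinth are actually implemented.

```python
import hashlib

# Each page links to `fanout` children derived from the page's own path,
# so the "site" is endless but deterministic (the same URL always
# renders the same page, which looks more convincing to a crawler).
def maze_page(path, fanout=4):
    links = []
    for i in range(fanout):
        child = hashlib.sha256(f"{path}/{i}".encode()).hexdigest()[:12]
        links.append(f"/trap/{child}")
    anchors = "".join(f'<a href="{link}">{link}</a>' for link in links)
    return f"<html><body>{anchors}</body></html>"

print(maze_page("/trap/start").count("<a href="))  # → 4
```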

Meanwhile, tools like Nova can create a “haystack” of hosts (or, in fact, entire networks) that appear to be part of your environment from an attacker’s point of view. An attacker can get lost within this, potentially spending hours or days attempting to sort the “wheat” (your actual production network) from the “chaff” (the bogus virtual hosts).

The bottom line: Open source options can be a great way to experiment with some of these approaches if you’re not already doing them. Fortunately, there are a number of excellent options to choose from to help you get started.

Listen to the podcast series: Take Back Control of Your Cybersecurity now

The post Open Source Tools for Active Defense Security appeared first on Security Intelligence.

The percentage of open source code in proprietary apps is rising

The number of open source components in the codebase of proprietary applications keeps rising, and with it the risk of those apps being compromised by attackers leveraging vulnerabilities in them, a recent report has shown. Compiled from the anonymized data of over 1,100 commercial codebases audited in 2017 by the Black Duck On-Demand audit services group, the report revealed that 96 percent of the scanned applications contain open source components, with …

The post The percentage of open source code in proprietary apps is rising appeared first on Help Net Security.

The Cherry on Top: Add Value to Existing Risk Management Activities With Open Source Tools

Telling people about the virtues of open source security tools is like selling people on ice cream sundaes: It doesn’t take much of a sales pitch — and most people are convinced before you start.

It’s probably not surprising that most security professionals are already using open source solutions to put a cherry on top of their existing security infrastructure. From Wireshark to OpenVAS and Kali Linux, open source software is a key component in many security practitioners’ arsenal.

But despite the popularity of open source tools for technical tasks, practitioners often view risk management and compliance initiatives as outside the purview of open source. There are a few reasons for this. Open source projects that directly support these efforts are harder to come by, and there’s often less urgency to implement them compared to technical solutions that directly address security issues, for example. Although open source solutions aren’t always top of mind when it comes to these broader efforts, they can help IT teams maximize the value of their risk management frameworks and boost the organization’s overall security posture.

5 Ways to Supplement Your Risk Strategy Using Open Source Software

Two years ago, we compiled a list of free and open source tools to help organizations build out a systemic risk management strategy. Much has changed in the cybersecurity world since then, so let’s take a look at some additional tools that can help you cover more ground, drive efficiency and add value to your existing risk management strategy.

1. Threats and Situational Awareness

All security practitioners know risk is a function of the likelihood of a security incident and its potential impact should it come to pass. To understand these variables, it’s crucial to examine the threat environment, including what tools attackers use, their motivations and methods of operation, through formalized threat modeling.

When it comes to application security, threat modeling enables practitioners to unpack software and examine it from an attacker’s point of view. Tools like OWASP’s Threat Dragon and HTML5-based SeaSponge help security teams visually model app-level threats. If used creatively, they can be extended to model attack scenarios that apply to data and business processes as well. Security teams can also incorporate threat modeling directly into governance, risk management and compliance (GRC) processes to inform assessment and mitigation strategies.

2. Workflow Automation

Logistical and organizational considerations can have a significant impact on risk management. The process has a defined order of operations that might span long periods of time, and it takes discipline to see it through. For example, risk assessment should occur before risk treatment, and treatment should be completed before monitoring efforts begin.

It’s also important to account for interdependencies. It might be more effective to assess a RESTful front-end application before the back-end system it interfaces with, for instance. It’s all about timing: If you assess the risk associated with an application or business process today, what will happen a year from now when the business process has evolved? What if the underlying technology shifts or the business starts serving new customers? What about five years from now?

Process automation tools, such as ProcessMaker and Bonita, can help security teams support both of these aspects of the risk management process. These are not necessarily security solutions, but tools designed to build and automate workflows. In a security context, they can help analysts automate everything from policy approval to patch management. For risk management specifically, these tools help security teams ensure processes are followed correctly, and risks are reassessed after they’ve been completed.
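The ordering constraint described above (assess before treat, treat before monitor) can be sketched as a tiny state machine; a workflow tool like ProcessMaker or Bonita models the same discipline graphically and at scale. The step names here are illustrative:

```python
# Enforce the risk-process order: each step may only run after the
# previous one has completed.
ORDER = ["assess", "treat", "monitor"]

class RiskWorkflow:
    def __init__(self):
        self.completed = []

    def advance(self, step):
        expected = ORDER[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected '{expected}', got '{step}'")
        self.completed.append(step)

wf = RiskWorkflow()
wf.advance("assess")
wf.advance("treat")
wf.advance("monitor")
print(wf.completed)  # → ['assess', 'treat', 'monitor']
```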

3. Automated Validation

The process of implementing a risk mitigation strategy has two steps: The first is to select a countermeasure or control to address a certain risk. The second is to validate the effectiveness of that countermeasure. It can be extremely time-intensive to execute the second part of the process consistently.

The Security Content Automation Protocol (SCAP) can help security leaders ensure the validation step is performed consistently and completely. Introduced in National Institute of Standards and Technology (NIST) Special Publication 800-126, SCAP enables analysts to define vendor-agnostic security profiles for devices and automatically validate their configuration.

One benefit to using SCAP for validation is the degree of support in the security marketplace. Most vulnerability assessment tools natively support it, as do a number of endpoint management products. By employing tools, such as those available from the OpenSCAP project, security teams can derive value today and in the future as security toolsets evolve.
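As a sketch of how a validation script might drive OpenSCAP, the snippet below assembles an `oscap xccdf eval` command line. The profile and datastream file names are examples; substitute the SCAP content shipped for your platform.

```python
import subprocess

def build_oscap_cmd(datastream, profile, results="results.xml"):
    # oscap evaluates the XCCDF profile in the datastream and writes
    # machine-readable results for later reporting.
    return ["oscap", "xccdf", "eval",
            "--profile", profile,
            "--results", results,
            datastream]

cmd = build_oscap_cmd("ssg-rhel7-ds.xml",
                      "xccdf_org.ssgproject.content_profile_standard")
print(" ".join(cmd))
# On a host with OpenSCAP installed:
# subprocess.run(cmd, check=False)
```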

4. Documentation and Recordkeeping

Risk decisions made today are much more defensible in hindsight when there’s documentation to support them. Let’s say, for example, that you’ve evaluated a countermeasure and decided that the cost to implement it outweighs the risk — or that your organization has reviewed a risk and decided to accept it. Should the unthinkable happen (e.g., a data breach), it’s much easier to justify your decisions when there’s documentation supporting your analysis and the conclusions you’ve drawn. While any record-keeping tool can help you do this, a specialized solution, such as the community version of GRC Envelop, can add value because it was developed with risk activities in mind.

5. Metrics and Reporting

Finally, open source tools can support ongoing metrics gathering and risk reporting. There are numerous aspects of a risk program that are worth measuring, such as near-miss information (i.e., attacks that were stopped before causing damage), log data, mitigated incidents, risk assessment results, automated vulnerability scanning data and more.

Tools like Graphite are purpose-built for storing time-series data: numeric values that change with time. Collecting and storing this data enables analysts to report on the risk associated with those assets. The more frequently they collect it, the closer they can get to producing a continuous view of the organization’s risk profile.
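Graphite’s plaintext protocol makes the feeding side trivial: one `path value timestamp` line per datapoint, sent to the carbon listener. The metric name below is illustrative:

```python
import time

def graphite_line(path, value, timestamp=None):
    # Graphite's plaintext protocol: "<path> <value> <timestamp>"
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {ts}"

line = graphite_line("security.risk.open_findings", 42, 1525132800)
print(line)  # → security.risk.open_findings 42 1525132800
# In practice, send line + "\n" over TCP to carbon (default port 2003).
```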

The Cherry on Top of Your Risk Management Strategy

As we’ve shown, there are quite a few open source alternatives out there that can add value to the risk management activities you may already be performing. By choosing the right tools to supplement your strategy, you can drive efficiency with your risk efforts, realize more valuable outcomes and improve your organization’s overall risk posture today and in the future.

Listen to the podcast series: Take Back Control of Your Cybersecurity now

The post The Cherry on Top: Add Value to Existing Risk Management Activities With Open Source Tools appeared first on Security Intelligence.

Advancing the future of society with AI and the intelligent edge

The world is a computer, filled with an incredible amount of data. By 2020, the average person will generate 1.5GB of data a day, a smart home 50GB and a smart city a whopping 250 petabytes of data per day. This data presents an enormous opportunity for developers, giving them a seat of power while also giving them tremendous responsibility. That’s why this morning at Build, we are focused on equipping these developers with the tools and guidance to change the world. On stage in Seattle, Microsoft CEO Satya Nadella is describing this new world view, fueled by AI that can power better health care, relieve challenges around basic human needs and create a society that’s more inclusive and accessible.

Helping create a better, safer, more just world is a responsibility we take seriously at Microsoft. We’ve always been committed to the ethical creation and use of technology. As AI increasingly becomes part of our lives, Microsoft’s commitment to advancing human good has never been stronger. Today, we’re announcing AI for Accessibility, a new $25 million, five-year program aimed at harnessing the power of AI to amplify human capability for the more than one billion people around the world with disabilities. AI for Accessibility is a call to action for developers, NGOs, academics, researchers and inventors to accelerate their work for people with disabilities, focusing on three areas: employment, human connection and modern life. It includes grants, technology and AI expertise to accelerate the development of accessible and intelligent AI solutions and builds on recent advancements in Azure Cognitive Services to help developers create intelligent apps that can empower people with hearing, vision and other disabilities. Real-time speech-to-text transcription, visual recognition services and predictive text functionality that suggests words as people type are just a few examples. We’ve seen this impact through the launch of Seeing AI and alt-text which empowers people who are blind or low vision; as well as Helpicto, which helps people with autism.

If AI is the heart of how we can advance society, the intelligent cloud and the intelligent edge are the backbone. In the next 10 years, billions of everyday devices will be connected — smart devices that can see, listen, reason, predict and more, without a 24/7 dependence on the cloud. This is the intelligent edge, and it is the interface between the computer and the real world. The edge takes AI and cloud together to collect and make sense of new information, especially in scenarios that are too dangerous for humans or require new approaches to solve, whether they be on the factory floor or in the operating room.

Today we’re giving developers the tools and guidance to build these possibilities. For example, we’re making it easier to build apps at the edge by open sourcing the Azure IoT Edge Runtime, allowing customers to modify the runtime and customize applications at the edge. We’re giving developers Custom Vision — the first Azure Cognitive Service available for the edge — to build applications that use powerful AI algorithms that interpret, listen, speak and see for edge devices. And we are partnering across both DJI and Qualcomm. Microsoft and DJI, the world’s largest drone company, will collaborate to develop commercial drone solutions so that developers in key vertical segments such as agriculture, construction and public safety can build life-changing solutions, like applications that can help farmers produce more crops. With Qualcomm Technologies Inc., we announced a joint effort to create a vision AI dev kit running Azure IoT Edge, for camera-based IoT solutions. The camera can power advanced Azure services like machine learning and cognitive services that can be downloaded from Azure and run locally on the edge. Other advancements include a preview of Project Brainwave, an architecture for deep neural net processing, that is now available on Azure and on the edge. Project Brainwave makes Azure the fastest cloud to run real-time AI today.

We are also releasing new Azure Cognitive Services updates such as a unified Speech service that makes it easier for developers to add speech recognition, text-to-speech, customized voice models and translation to their applications. In addition, we’re making Azure the best place to develop conversational AI experiences integrated with any agent. New updates to Bot Framework, combined with our new Cognitive Services updates, will power the next generation of conversational bots, enabling richer dialogs and full personality and voice customization to match a company’s brand identity.

It was eight years ago when we shipped Kinect, which was the first AI device with speech, gaze and vision. We then took that technology forward with Microsoft HoloLens. We’ve seen developers build transformative solutions across a multitude of industries, from security to manufacturing to health care and more. As sensor technology has evolved, we see incredible possibilities for combining these sensors with the power of Azure AI services such as machine learning, Cognitive Services and IoT Edge.

Today we are excited to announce a new initiative, Project Kinect for Azure — a package of sensors from Microsoft that contains our unmatched time-of-flight depth camera, with onboard compute, in a small, power-efficient form factor — designed for AI on the edge. Project Kinect for Azure brings together this leading hardware technology with Azure AI to empower developers with new scenarios for working with ambient intelligence.

Similarly, our Speech Devices software development kit announced today delivers superior audio processing from multi-channel sources for more accurate speech recognition, including noise cancellation, far-field voice and more. With this SDK, developers can build for a variety of voice-enabled scenarios like drive-thru ordering systems, in-car or in-home assistants, smart speakers and other digital assistants.

This new age of technology is also fueled by mixed reality, which is opening up new possibilities in the workplace. Today we announced two new apps that will help empower firstline workers, the first workers to interface with customers and triage problems: Microsoft Remote Assist and Microsoft Layout. Microsoft Remote Assist enables remote collaboration via hands-free video calling, letting firstline workers share what they see with any expert on Microsoft Teams, while staying hands on to solve problems and complete tasks together. In a similar vein, Microsoft Layout lets workers design spaces in context with mixed reality, using 3D models for creating room layouts with holograms.

Whether creating a more inclusive and accessible world, solving problems that plague humanity or helping improve the way we work and live, developers are playing a leading role. As new ideas and solutions with AI and intelligent edge emerge, Microsoft will continue to advocate for developers and give them the tools and cloud services that make it possible to build these new solutions to solve real problems. From the top down, we are a developer-led company that continues to invest in coders and give them free rein to solve problems.

Learn more about how we’re empowering developers to build for this future today using Azure and M365, via blog posts from Executive Vice President of Cloud + AI Scott Guthrie and Corporate Vice President of Windows Joe Belfiore.


The post Advancing the future of society with AI and the intelligent edge appeared first on The Official Microsoft Blog.

Ep. 103 – How To Be A Good Parent With Michael Bazzell

There are a few guests that we have had on multiple times and yet continue to excite and entertain us. This month we invite back the amazing Michael Bazzell to discuss OSINT, Security and parenting tips. March 12, 2018




Get Involved

Got a great idea for an upcoming podcast? Send us a quick message on the contact form!

Enjoy the Outro Music? Thanks to Clutch for allowing us to use Son of Virginia as our new SEPodcast Theme Music

And check out a schedule for all our training at Social-Engineer.Com

Check out the Innocent Lives Foundation to help unmask online child predators.

The post Ep. 103 – How To Be A Good Parent With Michael Bazzell appeared first on Security Through Education.

FLARE VM: The Windows Malware Analysis Distribution You’ve Always Needed!

UPDATE (April 26, 2018): The web installer method to deploy FLARE VM is now deprecated. Please refer to the README on the FLARE VM GitHub for the most up-to-date installation instructions.

As a reverse engineer on the FLARE Team, I rely on a customized Virtual Machine (VM) to perform malware analysis. The Virtual Machine is a Windows installation with numerous tweaks and tools to aid my analysis. Unfortunately, maintaining a custom VM like this is very laborious: tools frequently go out of date, and it is hard to change or add new things. There is also a constant fear that if the VM gets corrupted, it would be super tedious to replicate all of the settings and tools that I’ve built up over the years. To address this and many related challenges, I have developed a standardized (but easily customizable) Windows-based security distribution called FLARE VM.

FLARE VM is a freely available, open-source, Windows-based security distribution designed for reverse engineers, malware analysts, incident responders, forensicators, and penetration testers. Inspired by open-source Linux-based security distributions like Kali Linux, REMnux and others, FLARE VM delivers a fully configured platform with a comprehensive collection of Windows security tools such as debuggers, disassemblers, decompilers, static and dynamic analysis utilities, network analysis and manipulation tools, web assessment, exploitation, and vulnerability assessment applications, and many others.

The distribution also includes the FLARE team’s public malware analysis tools such as FLOSS and FakeNet-NG.

How To Get It

You are expected to have an existing installation of Windows 7 or above. This allows you to choose the exact Windows version, patch level, architecture and virtualization environment yourself.

Once you have that available, you can quickly deploy the FLARE VM environment by visiting the following URL in Internet Explorer (other browsers are not going to work):

After you navigate to the above URL in Internet Explorer, you will be presented with a Boxstarter WebLauncher dialog. Select Run to continue the installation as illustrated in Figure 1.

Figure 1: FLARE VM Installation

Following successful installation of Boxstarter WebLauncher, you will be presented with a console window and one more prompt to enter your Windows password, as shown in Figure 2. Your Windows password is necessary to restart the machine several times during the installation without prompting you to log in every time.

Figure 2: Boxstarter Password Prompt

The rest of the process is fully automated, so prepare yourself a cup of coffee or tea. Depending on your connection speed, the initial installation takes about 30-40 minutes. Your machine will also reboot several times due to the requirements of the numerous software installations. During the deployment process, you will see installation logs for a number of packages.

Once the installation is complete, it is highly recommended to switch the Virtual Machine networking settings to Host-Only mode so that malware samples would not accidentally connect to the Internet or local network. Also, take a fresh virtual machine snapshot so this clean state is saved! The final FLARE VM installation should look like Figure 3.

Figure 3: FLARE VM installation
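If your hypervisor happens to be VirtualBox (an assumption on my part; VMware and Hyper-V have equivalent settings in their UIs), both the networking change and the snapshot can be scripted from the host with VBoxManage. The VM name and host-only adapter name below are placeholders for your own setup:

```shell
# Power the VM off before changing its network adapter.
VBoxManage controlvm "FLARE-VM" poweroff

# Switch NIC 1 to host-only networking so samples cannot reach the
# Internet or local network; the adapter name varies by host OS.
VBoxManage modifyvm "FLARE-VM" --nic1 hostonly --hostonlyadapter1 vboxnet0

# Save a clean snapshot to roll back to after each detonation.
VBoxManage snapshot "FLARE-VM" take "clean-install"
```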

NOTE: If you encounter a large number of error messages, try to simply restart the installation. All of the existing packages will be preserved and new packages will be installed.

Getting Started

The VM configuration and the included tools were either developed or carefully selected by the members of the FLARE team who have been reverse engineering malware, analyzing exploits and vulnerabilities, and teaching malware analysis classes for over a decade. All of the tools are organized in the directory structure shown in Figure 4.

Figure 4: FLARE VM Tools

While we attempt to make the tools available as shortcuts in the FLARE folder, several are available from the command line only. Please see the online documentation for the most up-to-date list.

Sample Analysis

In order to best illustrate how FLARE VM can assist in malware analysis tasks let’s perform a basic analysis on one of the samples we use in our Malware Analysis Crash Course.

First, let’s obtain some basic indicators by looking at the strings in the binary. For this exercise, we are going to run FLARE’s own FLOSS tool, which is a strings utility on steroids; additional information about the tool is available online. You can launch it by clicking on the FLOSS icon in the taskbar and running it against the sample, as illustrated in Figure 5.

Figure 5: Running FLOSS
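FLOSS can also be run from a plain console. A sketch of a typical invocation follows; the sample name is a placeholder, and the flags should be checked against `floss -h` for the version installed in your VM:

```shell
# Run FLOSS against a sample to recover static strings plus the stack
# and decoded strings that a plain "strings" utility would miss.
# "sample.exe" is a placeholder for your own file.
floss sample.exe

# Only report strings of at least 8 characters to cut down on noise.
floss -n 8 sample.exe
```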

Unfortunately, looking over the resulting strings in Figure 6, only one string really stands out, and it is not clear how it is used.

Figure 6: Strings Analysis

Let’s dig a bit more into the binary by opening up CFF Explorer in order to analyze the sample’s imports, resources, and PE header structure. CFF Explorer and a number of other utilities are available in the FLARE folder, which can be accessed from the Desktop or the Start menu as illustrated in Figure 7.

Figure 7: Opening Utilities

While analyzing the PE header, we found several indicators that the binary contains a resource object with an additional payload. For example, the Import Address Table contains relevant Windows API calls such as LoadResource, FindResource and, finally, WinExec. Unfortunately, as you can see in Figure 8, the embedded payload “BIN” contains junk, so it is likely encrypted.

Figure 8: PE Resource

At this point, we could continue the static analysis, or we could “cheat” a bit by switching over to basic dynamic analysis techniques. Let’s attempt to quickly gather basic indicators by using another FLARE tool called FakeNet-NG. FakeNet-NG is a dynamic network emulation tool that tricks malware into revealing its network functionality by presenting it with fake services such as DNS, HTTP, FTP, IRC and many others; additional information about the tool is available online.
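For reference, a typical FakeNet-NG session is started from an elevated console. This is a sketch; the configuration file path below is purely illustrative, and the available options should be confirmed with the tool’s own help output:

```shell
# Start FakeNet-NG with its default configuration. It must run as
# Administrator so it can bind the fake DNS, HTTP, FTP, etc. listeners.
fakenet

# Supply a custom configuration file instead (path is illustrative).
fakenet -c custom.ini
```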

Also, let’s launch Procmon from Sysinternals Suite in order to monitor all of the File, Registry and Windows API activity as well. You can find both of these frequently used tools in the taskbar illustrated in Figure 9.

Figure 9: Dynamic Analysis

After executing the sample with Administrator privileges, we quickly find excellent network- and host-based indicators. Figure 10 shows FakeNet-NG responding to the malware’s attempt to communicate over HTTP. Here we capture useful indicators such as a complete HTTP header, the URL and a potentially unique User-Agent string. Also, notice that FakeNet-NG is capable of identifying the exact process communicating, which is level1_payload.exe. This process name corresponds to the unique string that we identified during static analysis but couldn’t understand how it was used.

Figure 10: FakeNet-NG

Comparing our findings with the output of Procmon in Figure 11, we can confirm that the malware is indeed responsible for creating the level1_payload.exe executable in the system32 folder.

Figure 11: Procmon

As part of the malware analysis process, we could continue digging deeper by loading the sample in a disassembler and performing further analysis inside a debugger. However, I would not want to spoil this fun for our Malware Analysis Crash Course students by sharing all the answers here. That said, all of the relevant tools to perform such analysis are already included in the distribution, such as the IDA Pro and Binary Ninja disassemblers, a nice collection of debuggers and several plugins, and many others to make your reverse engineering tasks as convenient as possible.

Have It Your Way

FLARE VM is a constantly growing and changing project. While we try to cover as many use-case scenarios as possible, covering them all is simply impossible due to the nature of the project. Luckily, FLARE VM is extremely easy to customize because it was built on top of the Chocolatey project. Chocolatey is a Windows-based package management system with thousands of packages. In addition to the public Chocolatey repository, FLARE VM uses our own FLARE repository, which is constantly growing and currently contains about 40 packages.

What all this means is that if you want to quickly add a package, say Firefox, you no longer have to navigate to the software developer’s website. Simply open up a console and type in the command in Figure 12 to automatically download and install the package:

Figure 12: Installing packages

In a few short moments, the Firefox icon will appear on your Desktop with no user interaction necessary.
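The command from Figure 12 is a standard Chocolatey invocation; Firefox is just the example package here, and any package name from the repositories can be substituted:

```shell
# Install Firefox from the public Chocolatey repository; -y answers
# all confirmation prompts so the install runs unattended.
choco install firefox -y

# "cinst" is a built-in short alias for "choco install".
cinst firefox -y
```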

Staying up to date

As I mentioned in the beginning, one of the hardest challenges of an unmanaged Virtual Machine is keeping all the tools up to date. FLARE VM solves this problem. You can completely update the entire system by simply running the command in Figure 13.

Figure 13: Staying up to date

If any of the installed packages have newer versions, they will be automatically downloaded and installed.
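The update command from Figure 13 is likewise a single Chocolatey call that upgrades every installed package, FLARE tools included:

```shell
# Upgrade all installed Chocolatey packages to their latest versions.
choco upgrade all -y

# "cup" is the short alias for "choco upgrade".
cup all -y
```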

NOTE: Don’t forget to take another clean snapshot of an updated system and set networking back to Host-Only.


I hope you enjoy this new free tool and will adopt it as another trusted resource to perform reverse engineering and malware analysis tasks. Next time you need to set up a new malware analysis environment, try out FLARE VM!

In these few pages, we could only scratch the surface of everything that FLARE VM is capable of; however, feel free to leave your comments, tool requests, and bug reports on our GitHub issues page.

Five Reasons I Want China Running Its Own Software

Periodically I read about efforts by China, or Russia, or North Korea, or other countries to replace American software with indigenous or semi-indigenous alternatives. I then reply via Twitter that I love the idea, with a short reason why. This post will list the top five reasons why I want China and other likely targets of American foreign intelligence collection to run their own software.

1. Many (most?) non-US software companies write lousy code. The US is by no means perfect, but our developers and processes generally appear to be superior to foreign indigenous efforts. Cisco vs Huawei is a good example. Cisco has plenty of problems, but it has processes in place to manage them, plus secure code development practices. Lousy indigenous code means it is easier for American intelligence agencies to penetrate foreign targets. (An example of a foreign country that excels in writing code is Israel, but thankfully it is not the same sort of priority target as China, Russia, or North Korea.)

2. Many (most?) non-US enterprises are 5-10 years behind US security practices. Even if a foreign target runs decent native code, the IT processes maintaining that code are lagging compared to American counterparts. Again, the US has not solved this problem by any stretch of the imagination. However, relatively speaking, American inventory management, patch management, and security operations have the edge over foreign intelligence targets. Because non-US enterprises running indigenous code will not necessarily be able to benefit from American expertise (as they might if they were running American code), these deficiencies will make them easier targets for foreign exploitation.

3. Foreign targets running foreign code is win-win for American intel and enterprises. The current vulnerability equities process (VEP) puts American intelligence agencies in a quandary. The IC develops a zero-day exploit for a vulnerability, say for use against Cisco routers. American and Chinese organizations use Cisco routers. Should the IC sit on the vulnerability in order to maintain access to foreign targets, or should it release the vulnerability to Cisco to enable patching and thereby protect American and foreign systems?

This dilemma disappears in a world where foreign targets run indigenous software. If the IC identifies a vulnerability in Cisco software, and the majority of its targets run non-Cisco software, then the IC is more likely (or should be pushed to be more likely) to assist with patching the vulnerable software. Meanwhile, the IC continues to exploit Huawei or other products at its leisure.

4. Writing and running indigenous code is the fastest way to improve. When foreign countries essentially outsource their IT to vendors, they become program managers. They lose or never develop any ability to write and run quality software. Writing and running your own code will enroll foreign organizations in the security school of hard knocks. American intel will have a field day for 3-5 years against these targets, as they flail around in a perpetual state of compromise. However, if they devote the proper native resources and attention, they will learn from their mistakes. They will write and run better software. Now, this means they will become harder targets for American intel, but American intel will retain the advantage of point 3.

5. Trustworthy indigenous code will promote international stability. Countries like China feel especially vulnerable to American exploitation. They have every reason to be scared. They run code written by other organizations. They don't patch it or manage it well. Their security operations stink. The American intel community could initiate a complete moratorium on hacking China, and the Chinese would still be ravaged by other countries or criminal hackers, all the while likely blaming American intel. They would not be able to assess the situation. This makes for a very unstable situation.

Therefore, countries like China and others are going down the indigenous software path. They understand that software, not oil as Daniel Yergin once wrote, is now the "commanding heights" of the economy. Pursuing this course will subject these countries to many years of pain. However, in the end I believe it will yield a more stable situation. These countries should begin to perceive that they are less vulnerable. They will experience their own vulnerability equities process. They will be more aware and less paranoid.

In this respect, indigenous software is a win for global politics. The losers, of course, are global software companies. Foreign countries will continue to make short-term deals to suck intellectual property and expertise from American software companies, before discarding them on the side of Al Gore's information highway.

One final point -- a way foreign companies could jump-start their indigenous efforts would be to leverage open source software. I doubt they would necessarily honor licenses which require sharing improvements with the open source community. However, open source would give foreign organizations the visibility they need and access to expertise that they lack. Microsoft's shared source and similar programs were a step in this direction, but I suggest foreign organizations adopt open source instead.

Now, widespread open source adoption by foreign intelligence targets would erode the advantages for American intel that I explained in point 3. I'm betting that foreign leaders are likely similar to Americans in that they tend to not trust open source, and prefer to roll their own and hold vendors accountable. Therefore I'm not that worried, from an American intel perspective, about point 3 being vastly eroded by widespread foreign open source adoption.


Analyzing the Malware Analysts – Inside FireEye’s FLARE Team

At the Black Hat USA 2016 conference in Las Vegas last week, I was fortunate to sit down with Michael Sikorski, Director, FireEye Labs Advanced Reverse Engineering (FLARE) Team.

During our conversation we discussed the origin of the FLARE team, what it takes to analyze malware, Michael’s book “Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software,” and the latest open source freeware tools FLOSS and FakeNet-NG.

Listen to the full podcast here.