For years, the United Arab Emirates (UAE) has committed itself to adopting information technology (IT) and electronic communication. The UAE’s Telecommunications Regulatory Authority (TRA) has noted that this policy has made the state’s government agencies and organizations more efficient and has improved individuals’ ability to collaborate around the world. As such, the […]
The post UAE’s Information Assurance Regulation – How to Achieve Compliance appeared first on The State of Security.
A shift in 2020 to employees working from home has changed how companies operate. In particular, the rush to work-from-home has given rise to elevated security threats. While companies are now spending significant amounts on cybersecurity, threats are still outpacing corporate outlay. As tech leaders have weighed their options on how to harden their security…
The post Why a unified view of threats strengthens your cybersecurity posture first appeared on IT World Canada.
This is a current list of where and when I am scheduled to speak:
- I’ll be speaking at Cyber Week Online, October 19-21, 2020.
- I’ll be speaking at the IEEE Symposium on Technology and Society virtual conference, November 12-15, 2020.
- I’ll be keynoting the 2020 Conference on Cyber Norms on November 12, 2020.
- I’m speaking at the (ISC)² Security Congress 2020, November 16, 2020.
- I’ll be on a panel at the OECD Global Blockchain Policy Forum 2020 on November 17, 2020.
- I’ll be keynoting the HITB CyberWeek Virtual Edition on November 18, 2020.
- I’ll be speaking at an Informa event on February 28, 2021. Details to come.
The list is maintained on this page.
It’s hard to keep pace with all the changes happening in the world of cybersecurity. Security experts and leaders must continue learning (and unlearning) to stay ahead of the ever-evolving threat landscape. In fact, many of us are in this field because of our desire to continuously challenge ourselves and serve the greater good.
So many of the advancements in security are now utilizing this amorphous, at times controversial, and complex term called “artificial intelligence” (AI). Neural networks, clustering, fuzzy logic, heuristics, deep learning, random forests, adversarial machine learning (ML), unsupervised learning. These are just a few of the concepts that are being actively researched and utilized in security today.
But what do these techniques do? How do they work? What are the benefits? As security professionals, we know you have these questions, and so we decided to create Security Unlocked, a new podcast launching today, to help unlock (we promise not to overuse this pun) insights into these new technologies and the people creating them.
In each episode, hosts Nic Fillingham and Natalia Godyla take a closer look at the latest in threat intelligence, security research, and data science. Our expert guests share insights into how modern security technologies are being built, how threats are evolving, and how machine learning and artificial intelligence are being used to secure the world.
Each episode will also feature an interview with one of the many experts working in Microsoft Security. Guests will share their unique path to Microsoft and the infosec field, what they love about their calling and their predictions about the future of ML and AI.
New episodes of Security Unlocked will be released twice a month, with the first three episodes available today on all major podcast platforms. We will cover specific topics in future blog posts and link to the episodes that explore them in more depth.
Episode 1: Going ‘deep’ to identify attacks, and Holly Stewart
Guests: Arie Agranonik and Holly Stewart
In this episode, Nic and Natalia invited Arie Agranonik, Senior Data Scientist at Microsoft, to better understand how we’re using deep learning models to look at behavioral signals and identify malicious process trees. In their chat, Arie explains the differences and use cases for techniques such as deep learning, neural networks, and transfer learning.
Nic and Natalia also speak with Holly Stewart, Principal Research Manager at Microsoft, to learn how, and when, to use machine learning, best practices for building an awesome security research team, and the power of diversity in security.
Episode 2: Unmasking threats with AMSI and ML, and Dr. Josh Neil
Guests: Ankit Garg, Geoff McDonald, and Dr. Josh Neil
In this episode, members of the Microsoft Defender ATP Research team chat about how the Antimalware Scan Interface (AMSI) and machine learning are stopping Active Directory attacks.
They’re also joined by Josh Neil, Principal Data Science Manager at Microsoft, as he discusses his path from music to mathematics, one definition of “artificial intelligence,” and the importance of combining multiple weak signals to gain a comprehensive view of an attack.
Episode 3: Behavior-based protection for the under-secured, and Dr. Karen Lavi
Guests: Hardik Suri and Dr. Karen Lavi
Blog referenced: Defending Exchange servers under attack
In this episode, Nic and Natalia chat with Hardik Suri on the importance of keeping servers up-to-date and how behavior-based monitoring is helping protect under-secured Exchange servers.
Dr. Karen Lavi, Senior Data Scientist Lead at Microsoft, joins the discussion to talk about commonalities between neuroscience and cybersecurity, her unique path to Microsoft (teaser: she started in the Israel Defense Forces and later earned her PhD in neuroscience), and her predictions on the future of AI.
Please join us monthly on the Microsoft Security Blog for new episodes. If you have feedback on how we can improve the podcast or suggestions for topics to cover in future episodes, please email us at email@example.com, or talk to us on our @MSFTSecurity Twitter handle.
And don’t forget to subscribe to Security Unlocked.
Some footage has already appeared on adult sites, with cybercriminals offering lifetime access to the entire loot for US$150
The post 50,000 home cameras reportedly hacked, footage posted online appeared first on WeLiveSecurity
I can honestly say that the two discussions featured in the latest episode of the Security Stories podcast have inspired and motivated me more than anything else has recently.
I really hope that as many people as possible get to listen to this episode. And I’m definitely not just saying that for my podcast stats.
Diversity in cybersecurity discussion
Firstly, I caught up with my co-host Noureen Njoroge, as well as Leticia Gamill, Cisco’s Channel leader for Canada and Latin America, and Matt Watchinski, Vice President of Cisco Talos.
Together, we discuss a crucial topic in cybersecurity: the significance of diverse representation, and what that can do for the industry.
Leticia oversees team members based across seven countries, and is a passionate supporter of diversity in cybersecurity. Last year she created a non-profit called LATAM Women in Cybersecurity to encourage more women in Florida and Latin America to enter the field.
As the leader of Talos, the largest commercial threat intelligence group in the world, Matt oversees all the intelligence activities necessary to support our security products and services that keep customers safe.
Matt is a huge ally for diversity in cybersecurity. Within Talos, he has created a culture and a hiring policy that ensures voices from multiple backgrounds can be heard.
And of course most regular Security Stories listeners already know my co-host Noureen, but just in case this is your first time listening, Noureen is a threat intelligence customer engineer. She’s the founder of Cisco’s global cybersecurity mentoring forum, running mentoring events twice a month.
She’s also the founder of the Mentors and Mentees women in Cybersecurity group on LinkedIn and the president of North Carolina Women in Cybersecurity (WiCyS) Affiliate chapter.
Noureen is listed among the Top 30 Most Admired Minority Professionals in Cybersecurity by SeQure World Magazine, and was recently crowned winner of the Cybersecurity Woman of the Year 2020 award.
Together, we talk about what leaders can be doing to ensure they’re hiring from a diverse pool of talent, and where they can hire people beyond the usual recruitment channels. We also discuss how organizations can build a culture of mentoring so that members of diverse teams feel valued and retention levels stay strong.
Meeting Mike Hanley
Our CISO story for this episode is Cisco’s new Chief Information Security Officer, Mike Hanley.
Mike steps into the role of CISO for Cisco after spending five years with Cisco Duo. He originally joined to run Duo Labs, and was soon asked by Dug Song to be Vice President of Security and to build and nurture the team around him.
During our chat, Mike talks about what the past few months have been like after stepping into the role of CISO for Cisco in the middle of a global pandemic.
A very revealing note for me: I don’t think there was an answer that Mike gave where he didn’t refer to his team. People are clearly the most important aspect of his role, and in this interview you can see exactly why.
In fact, here’s a comment Mike shared that particularly struck a chord with me:
“I’m constantly in awe of the innovative ideas that the people in my team come up with to solve problems. I have middle-school teachers, designers, engineers, and many more fields of expertise in my team – and every single one of them has brought something really unique and significant.”
From the importance of hiring diverse talent, to building a culture of appreciation, openness and fun (he used the word fun six times in the first few minutes – I was keeping count!), Mike’s interview is a fascinating listen for anyone leading a team today.
Episode time stamps
02:27 Discussion on diversity in cybersecurity
46:49 Mike Hanley interview
1:26:00 Closing remarks
Play the episode
The post Openness and support: Discussions on why diverse representation in cybersecurity matters appeared first on Cisco Blogs.
We are excited to announce the launch of our new partner training and certification paths, open to all authorized Veracode partners.
Based on partner feedback, we have designed these paths to provide a deeper understanding of the Veracode story and technical details around application security (AppSec). By enlisting in our training and certification paths, we enable partners to expand their business and support customers in developing a comprehensive AppSec program.
Some of the benefits of this new program:
- Free-of-charge, best-in-class trainings and certifications focused on AppSec.
- On-demand, self-paced paths that enable partners to learn what they want, when they want.
- Added visibility for individuals earning their certification with designated badges, showcasing the partner’s AppSec expertise.
- Greater access to leads and joint opportunities for partners with certified individuals.
These new training and certification paths give partners a choice of three levels of learning. Through on-demand, self-paced courses they can advance to the level of training that best suits their role, ultimately growing their business through application security offerings.
With this deeper level of knowledge, partners can expand their customer base, and sales and technical teams can support their prospects and customers more effectively in building and managing their AppSec program.
As always, we remain committed to our partners who do the important work of caring for customers, whether across the globe or in local and regional markets. We hope these new training and certification paths inspire further collaboration, increased business growth, and an even better experience for customers and prospects.
For more information on our partner training and certification, please contact your Veracode regional channel manager or send an email to firstname.lastname@example.org.
Microsoft’s security updates (the October 2020 Patch Tuesday) have been released. Patches have been rolled out for 87 security bugs. Eleven of the vulnerabilities addressed in this month’s Patch Tuesday received a “critical” rating from Microsoft, meaning that cybercriminals or malware may leverage them to gain full control over an unpatched endpoint with little […]
The post Patch Tuesday (October 2020): Microsoft Fixes Wormable Remote Code Execution Vulnerability appeared first on Heimdal Security Blog.
Today's podcast looks at a new way ransomware is leveraging Android, and Carnival shares some details about its ransomware attack.
Mandiant Threat Intelligence recently promoted a threat cluster to a named FIN (or financially motivated) threat group for the first time since 2017. We have detailed FIN11's various tactics, techniques and procedures in a report that is available now by signing up for Mandiant Advantage Free.
In some ways, FIN11 is reminiscent of APT1; they are notable not for their sophistication, but for their sheer volume of activity. There are significant gaps in FIN11’s phishing operations, but when active, the group conducts up to five high-volume campaigns a week. While many financially motivated threat groups are short lived, FIN11 has been conducting these widespread phishing campaigns since at least 2016. From 2017 through 2018, the threat group primarily targeted organizations in the financial, retail, and hospitality sectors. However, in 2019 FIN11’s targeting expanded to include a diverse set of sectors and geographic regions. At this point, it would be difficult to name a client that FIN11 hasn’t targeted.
Mandiant has also responded to numerous FIN11 intrusions, but we’ve only observed the group successfully monetize access in a few instances. This could suggest that the actors cast a wide net during their phishing operations, then choose which victims to further exploit based on characteristics such as sector, geolocation, or perceived security posture. Recently, FIN11 has deployed CLOP ransomware and threatened to publish exfiltrated data to pressure victims into paying ransom demands. The group’s shifting monetization methods (from point-of-sale (POS) malware in 2018, to ransomware in 2019, and hybrid extortion in 2020) are part of a larger trend in which criminal actors have increasingly focused on post-compromise ransomware deployment and data theft extortion.
Notably, FIN11 includes a subset of the activity security researchers call TA505, but we do not attribute TA505’s early operations to FIN11 and caution against using the names interchangeably. Attribution of both historic TA505 activity and more recent FIN11 activity is complicated by the actors’ use of criminal service providers. Like most financially motivated actors, FIN11 doesn’t operate in a vacuum. We believe that the group has used services that provide anonymous domain registration, bulletproof hosting, code signing certificates, and private or semi-private malware. Outsourcing work to these criminal service providers likely enables FIN11 to increase the scale and sophistication of their operations.
To learn more about FIN11’s evolving delivery tactics, use of services, post-compromise TTPs, and monetization methods, register for Mandiant Advantage Free. The full FIN11 report is also available through our FireEye Intelligence Portal (FIP). Then for even more information, register for our exclusive webinar on Oct. 29 where Mandiant threat intelligence experts will take a deeper dive into FIN11, including its origins, tactics, and potential for future activity.
Apple’s new iPhone 12, Prime Day kicked off the holiday shopping season, and Disney is going to be making a historic change.
The post Hashtag Trending - Apple unveils new iPhone; Amazon Prime Day; Disney makes moves first appeared on IT World Canada.
The Workshop on Economics of Information Security will be online this year. Register here.
Vulnerability Description Tripwire VERT has identified a stack-based buffer overflow in SonicWall Network Security Appliance (NSA). The flaw can be triggered by an unauthenticated HTTP request involving a custom protocol handler. The vulnerability exists within the HTTP/HTTPS service used for product management as well as SSL VPN remote access. Exposure and Impact An unskilled attacker […]
The post SonicWall VPN Portal Critical Flaw (CVE-2020-5135) appeared first on The State of Security.
Detrimental lies are not new. Even misleading headlines and text can fool a reader. However, the ability to alter reality has taken a leap forward with “deepfake” technology, which allows for the creation of images and videos of real people saying and doing things they never said or did. Deep learning techniques are escalating the technology’s finesse, producing ever more realistic content that is increasingly difficult to detect.
Deepfakes began to gain attention when a fake pornography video featuring a “Wonder Woman” actress was released on Reddit in late 2017 by a user with the pseudonym “deepfakes.” Several doctored videos have since been released featuring high-profile celebrities, some of which were purely for entertainment value and others which have portrayed public figures in a demeaning light. This presents a real threat. The internet already distorts the truth as information on social media is presented and consumed through the filter of our own cognitive biases.
Deepfakes will intensify this problem significantly. Celebrities, politicians and even commercial brands can face unique forms of threat tactics, intimidation, and personal image sabotage. The risks to our democracy, justice, politics and national security are serious as well. Imagine a dark web economy where deepfakers produce misleading content that can be released to the world to influence which car we buy, which supermarket we frequent, and even which political candidate receives our vote. Deepfakes can touch all areas of our lives; hence, basic protection is essential.
How are Deepfakes Created?
Deepfakes are a cutting-edge application of artificial intelligence (AI), often leveraged by bad actors to generate increasingly realistic and convincing fake images, videos, voice, and text. These videos are created by superimposing existing images, audio, and video onto source media files using an advanced deep learning technique called “Generative Adversarial Networks” (GANs). GANs are a relatively recent concept in AI that aims to synthesize artificial images indistinguishable from authentic ones. The GAN approach sets two neural networks to work against each other: one network, the “generator,” draws on a dataset to produce a sample that mimics it; the other, the “discriminator,” assesses how well the generator succeeded. Iteration by iteration, the discriminator’s feedback guides the generator’s next attempts. The increasing sophistication of GAN approaches has produced ever more convincing, nearly undetectable deepfakes, at a speed, scale, and level of nuance far beyond what human reviewers could achieve.
McAfee Deepfakes Lab Applies Data Science Expertise to Detect Bogus Videos
To mitigate this threat, McAfee today announced the launch of the McAfee Deepfakes Lab to focus the company’s world-class data science expertise and tools on countering the deepfake menace to individuals, organizations, democracy and the overall integrity of information across our society. The Deepfakes Lab combines computer vision and deep learning techniques to exploit hidden patterns and detect manipulated video elements that play a key role in authenticating original media files.
To ensure that the deep learning framework’s predictions, and the reasoning behind each prediction, are understandable, we spent a significant amount of time visualizing the layers and filters of our networks and then added a model-agnostic explainability framework on top of the detection framework. Having an explanation for each prediction helps us make an informed decision about how much to trust both the image and the model, and provides insights that can be used to improve the latter.
We also performed detailed validation and verification of the detection framework on a large dataset and tested its detection capability on deepfake content found in the wild. Our detection framework was able to detect a recent deepfake video of Facebook’s Mark Zuckerberg giving a brief speech about the power of big data. The tool not only provided an accurate detection score but also generated heatmaps, via the model-agnostic explainability module, highlighting the parts of his face that contributed to the decision, thereby building trust in our predictions.
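To make the idea of a model-agnostic explainability layer concrete, here is a minimal occlusion-based saliency sketch: slide a blank patch across the input, re-score it, and record how much each region’s occlusion moves the score. The scoring function below is a hypothetical stand-in (it simply reacts to a fixed “face” region), not McAfee’s detector; any black-box image-to-score function could be plugged in.

```python
import numpy as np

# Stand-in "deepfake score": any function mapping an image to a scalar.
# Here it reacts only to the mean intensity of a fixed "face" region,
# so the heatmap should light up over exactly that region.
FACE = (slice(8, 16), slice(8, 16))

def fake_score(img):
    return float(img[FACE].mean())

def occlusion_heatmap(img, score_fn, patch=4, fill=0.0):
    """Model-agnostic saliency: occlude each patch and record the score drop."""
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

img = np.ones((24, 24))
heat = occlusion_heatmap(img, fake_score)
# The most influential cell should fall inside the "face" region.
i, j = map(int, np.unravel_index(heat.argmax(), heat.shape))
print((i, j))  # → (2, 2), a patch inside the face region
```

Because the technique only needs to call the model, never inspect its internals, the same heatmap code works regardless of which deep learning framework the detector itself uses.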
Such easily available deepfakes reiterate the challenges that social networks face when it comes to policing manipulated content. As advancements in GAN techniques produce very realistic looking fake images, advanced computer vision techniques will need to be developed to identify and detect advanced forms of deepfakes. Additionally, steps need to be taken to defend against deepfakes by making use of watermarks or authentication trails.
Sounding the Alarm
We recognize that news media have considerable power in shaping people’s beliefs and opinions, and that truthfulness is sometimes sacrificed to maximize impact. The dictum “a picture is worth a thousand words” accentuates the significance of the deepfake phenomenon: credible yet fraudulent audio, video, and text will have a far larger impact, one that can be used to ruin celebrity and brand reputations or to sway political opinion, with terrifying implications. Computer vision and deep learning frameworks can authenticate visual media and detect fake content, but the damage to reputations and the influence on opinion can remain.
In launching the Deepfakes Lab, McAfee will work with traditional news and social media organizations to identify malicious deepfakes videos during this crucial 2020 national election season and help combat this new wave of disinformation associated with deepfakes.
In our next blog on deepfakes, we will demonstrate our detailed detection framework. With this framework, we will be helping to battle disinformation and minimize the growing challenge of deepfakes.
To engage the services of the McAfee Deepfakes Lab, news and social media organizations may submit suspect video for analysis by sending content links to email@example.com.
The post The Deepfakes Lab: Detecting & Defending Against Deepfakes with Advanced AI appeared first on McAfee Blogs.