Category Archives: threat detection

Are Applications of AI in Cybersecurity Delivering What They Promised?

Many enterprises are using artificial intelligence (AI) technologies as part of their overall security strategy, but results are mixed on the post-deployment usefulness of AI in cybersecurity settings.

This trend is supported by a new white paper from Osterman Research titled “The State of AI in Cybersecurity: The Benefits, Limitations and Evolving Questions.” According to the study, which included responses from 400 organizations with more than 1,000 employees, 73 percent of organizations have implemented security products that incorporate at least some level of AI.

However, 46 percent agree that rules creation and implementation are burdensome, and 25 percent said they do not plan to implement additional AI-enabled security solutions in the future. These findings may indicate that AI is still in the early stages of practical use and its true potential is still to come.

How Effective Is AI in Cybersecurity?

“Any ITDM should approach AI for security very cautiously,” said Steve Tcherchian, chief information security officer (CISO) and director of product at XYPRO Technology. “There are a multitude of security vendors who tout AI capabilities. These make for great presentations, marketing materials and conversations filled with buzz words, but when the rubber meets the road, the advancement in technology just isn’t there in 2019 yet.”

The marketing Tcherchian refers to has certainly drummed up considerable attention, but AI may not yet be delivering enough when it comes to measurable results for security. Respondents to the Osterman Research study noted that the AI technologies they have in place do not help mitigate many of the threats faced by enterprise security teams, including zero-day and advanced threats.

Still Work to Do, but Promise for the Future

While applications of artificial intelligence must mature before businesses can realize their full benefits, many in the industry feel the technology offers promise for a variety of applications, such as improving the speed of processing alerts.

“AI has a great potential because security is a moving target, and fixed rule set models will always be evaded as hackers are modifying their attacks,” said Marty Puranik, CEO of Atlantic.Net. “If you have a device that can learn and adapt to new forms of attacks, it will be able to at least keep up with newer types of threats.”

Research from the Ponemon Institute predicted several benefits of AI use, including cost savings, a lower likelihood of data breaches and productivity enhancements. The research found that businesses spent, on average, around $3 million fighting exploits without AI in place. Those with AI technology deployed spent an average of $814,873 on the same threats, a savings of more than $2 million.

Help for Overextended Security Teams

AI is also being considered as a potential point of relief for the cybersecurity skills shortage. Many organizations are pinched to find the help they need in security, with Cybersecurity Ventures predicting the skills shortage will increase to 3.5 million unfilled cybersecurity positions by 2021.

AI can help security teams increase efficiency by quickly making sense of all the noise from alerts. This could prove to be invaluable because at least 64 percent of alerts per day are not investigated, according to Enterprise Management Associates (EMA). AI, in tandem with meaningful analytics, can help determine which alerts analysts should investigate and discern valuable information about what is worth prioritizing, freeing security staff to focus on other, more critical tasks.

“It promises great improvements in cybersecurity-related operations, as AI releases security engineers from the necessity to perform repetitive manual processes and provides them with an opportunity and time to improve their skills, learn how to use new tools, technologies,” said Uladzislau Murashka, a certified ethical hacker (CEH) at ScienceSoft.

Note that while AI offers the potential for quicker, more efficient handling of alerts, human intervention will continue to be critical. Applications of artificial intelligence will not replace humans on the security team anytime soon.

Paving an Intelligent Path Forward

It’s important to consider another group that is investing in AI technology and using it for financial gains: cybercriminals. Along with enterprise security managers, those who make a living by exploiting sensitive data also understand the potential AI has for the future. It will be interesting to see how these capabilities play out in the future cat-and-mouse game of cybersecurity.

AI in cybersecurity is still in the early stages of its evolution, and its potential has yet to be fully realized. As security teams continue to invest in and develop AI technologies, these capabilities will someday be an integral part of cyberdefense.

The post Are Applications of AI in Cybersecurity Delivering What They Promised? appeared first on Security Intelligence.

Product showcase: Veriato Cerebral user & entity behavior analytics software

When it comes to identifying and stopping insider data security threats, actionable insights into people’s behaviors are invaluable. Employees involved in negative workplace events, contractors with access to critical systems and sensitive data, and departing employees all present elevated risks. Whether it’s a true insider exfiltrating data, or hackers leveraging compromised credentials to become an insider, behavior patterns can indicate both emerging and immediate risks to your security. Veriato Cerebral user & entity behavior analytics …

The post Product showcase: Veriato Cerebral user & entity behavior analytics software appeared first on Help Net Security.

It’s Time to Modernize Traditional Threat Intelligence Models for Cyber Warfare

When a client asked me to help build a cyberthreat intelligence program recently, I jumped at the opportunity to try something new and challenging. To begin, I set about looking for some rudimentary templates with a good outline for building a threat intelligence process, a few solid platforms that are user-friendly, the basic models for cyber intelligence collection and a good website for describing various threats an enterprise might face. This is what I found:

  1. There are a handful of rudimentary templates for building a good cyberthreat intelligence program available for free online. All of these templates leave out key pieces of information that any novice to the cyberthreat intelligence field would be required to know. Most likely, this is done to entice organizations into spending copious amounts of money on a specialist.
  2. The number of companies that specialize in the collection of cyberthreat intelligence is growing at a ludicrous rate, and they all offer something that is different, unique to certain industries, proprietary, automated via artificial intelligence (AI) and machine learning, based on pattern recognition, or equipped with behavioral analytics.
  3. The basis for all threat intelligence is heavily rooted in one of three basic models: Lockheed Martin’s Cyber Kill Chain, MITRE’s ATT&CK knowledge base and The Diamond Model of Intrusion Analysis.
  4. A small number of vendors working on cyberthreat intelligence programs or processes published a complete list of cyberthreats, primary indicators, primary actors, primary targets, typical attack vectors and potential mitigation techniques. Of that small number, very few were honest when there was no useful mitigation or defensive strategy against a particular tactic.
  5. All of the cyberthreat intelligence models in use today have gaps that organizations will need to overcome.
  6. A search within an article content engine for helpful articles with the keyword “threat intelligence” produced more than 3,000 results, and a Google search produced almost a quarter of a million. This is completely ridiculous. Considering how many organizations struggle to find experienced cyberthreat intelligence specialists to join their teams — and that cyberthreats grow by the day while mitigation strategies do not — it is not possible that there are tens of thousands of professionals or experts in this field.

It’s no wonder why organizations of all sizes in a variety of industries are struggling to build a useful cyberthreat intelligence process. For companies that are just beginning their cyberthreat intelligence journey, it can be especially difficult to sort through all these moving parts. So where do they begin, and what can the cybersecurity industry do to adapt traditional threat intelligence models to the cyber battlefield?

How to Think About Thinking

A robust threat intelligence process serves as the basis for any cyberthreat intelligence program. Here is some practical advice to help organizations plan, build and execute their program:

  1. Stop and think about the type(s) of cyberthreat intelligence data the organization needs to collect. For example, if a company manufactures athletic apparel for men and women, it is unnecessary to collect signals, geospatial or human intelligence.
  2. How much budget is available to collect the necessary cyberthreat intelligence? For example, does the organization have the budget to hire threat hunters and build a cyberthreat intelligence program uniquely its own? What about purchasing threat intelligence as a service? Perhaps the organization should hire threat hunters and purchase a threat intelligence platform for them to use? Each of these options carries very different short- and long-term costs.
  3. Determine where cyberthreat intelligence data should be stored once it is obtained. Does the organization plan to build a database or data lake? Does it intend to store collected threat intelligence data in the cloud? If that is indeed the intention, pause here and reread step one. Cloud providers have very different ideas about who owns data, and who is ultimately responsible for securing that data. In addition, cloud providers have a wide range of security controls — from the very robust to a complete lack thereof.
  4. How does the organization plan to use collected cyberthreat intelligence data? It can be used for strategic purposes, tactical purposes or both within an organization.
  5. Does the organization intend to share any threat intelligence data with others? If yes, then you can take the old cybersecurity industry adage “trust but verify” and throw it out. The new industry adage should be “verify and then trust.” Never assume that an ally will always be an ally.
  6. Does the organization have enough staff to spread the workload evenly, and does the organization plan to include other teams in the threat intelligence process? Organizations may find it very helpful to include other teams, either as strategic partners, such as vulnerability management, application security, infrastructure and networking, and risk management teams, or as tactical partners, such as red, blue and purple teams.

How Can We Adapt Threat Intelligence Models to the Cyber Battlefield?

As mentioned above, the threat intelligence models in use today were not designed for cyber warfare. They are typically linear models, loosely based on Carl Von Clausewitz’s military strategy and tailored for warfare on a physical battlefield. It’s time for the cyberthreat intelligence community to define a new model, perhaps one that is three-dimensional, nonlinear, rooted in elementary number theory and that applies vector calculus.

Much like game theory, The Diamond Model of Intrusion Analysis is sufficient if there are two players (the victim and the adversary), but it tends to fall apart if the adversary is motivated by anything other than sociopolitical or socioeconomic payoff, if there are three or more players (e.g., where collusion, cooperation and defection of classic game theory come into play), or if the adversary is artificially intelligent. In addition, The Diamond Model of Intrusion Analysis attempts to show a stochastic model diagram but none of the complex equations behind the model — probably because that was someone’s 300-page Ph.D. thesis in applied mathematics. This is not much help to the average reader or a newcomer to the threat intelligence field.

Nearly all models published thus far are focused on either external actors or insider threats, as though a threat actor must be one or the other. None of the widely accepted models account for, or include, physical security.

While there are many good articles about reducing alert fatigue in the security operations center (SOC), orchestrating security defenses, optimizing the SOC with behavioral analysis and so on, these articles assume that the reader knows what any of these things mean and what to do about any of it. A veteran in the cyberthreat intelligence field would have doubts that behavioral analysis and pattern recognition are magic bullets for automated threat hunting, for example, since there will always be threat actors that don’t fit the pattern and whose behavior is unpredictable. Those are two of the many reasons why the fields of forensic psychology and criminal profiling were created.

Furthermore, when it comes to the collection of threat intelligence, very few articles provide insight on what exactly constitutes “useful data,” how long to store it and which types of data analysis would provide the best insight.

It would be a good idea to get the major players in the cyberthreat intelligence sector together to develop at least one new model — but preferably more than one. It’s time for industry leaders to develop new ways of classifying threats and threat actors, share what has and has not worked for them, and build more boundary connections than the typical socioeconomic or sociopolitical ones. The sector could also benefit from looking ahead at what might happen if threat actors choose to augment their crimes with algorithms and AI.

The post It’s Time to Modernize Traditional Threat Intelligence Models for Cyber Warfare appeared first on Security Intelligence.

Why You Should Be Worried About London Blue’s Business Email Compromise Attacks

Phishing is nothing new, and efforts to train employees on how to detect and thwart phishing attacks should always be an essential component of any security awareness training program. But what happens when phishing attacks specifically target chief financial officers (CFOs)?

Researchers have discovered increasing evidence of a threat group named London Blue, a U.K.-based collective that focuses on CFOs at mortgage companies, accounting firms and some of the world’s largest banks. According to a report passed on to authorities by Agari, London Blue has collected email addresses for more than 50,000 senior-level targets in the U.S. and other countries, of which 71 percent hold a CFO title. The Agari report noted that London Blue operators have been utilizing email display name deception to trick senior employees into making fraudulent payments to the threat group’s accounts.

The ABCs of BEC

This type of attack, classified as business email compromise (BEC), builds on the typical phishing attack by taking the social engineering aspect to the next level — and sometimes includes elaborate hacking into email servers and the takeover of executive email accounts. But perhaps the most concerning feature of London Blue is that it is an organized cybercrime gang (OCCG) and, as such, works as efficiently as any modern corporation, with specific departments for lead generation, financial operations and human resources.

Crane Hassold, Agari’s senior director of threat research, explained that the report came about when London Blue targeted the company’s CFO for a potential BEC attack.

“Once that came in we started doing a little more digging, and there was a lot of active engagement with the scammers to understand more about them,” he said. It took Agari about four months of engagement after first observing the threat group to release the report.

BEC is a hot topic because it has been relatively successful. What’s really interesting to Hassold and his team is that the attack doesn’t require any technical means to get a result.

“When we think of cyberattacks, we think of things like malware-based attacks where there’s something technical that happened, but in this case, it’s pure social engineering,” said Hassold. Given his background with the Federal Bureau of Investigation (FBI)’s Behavioral Analysis Unit, Hassold is keenly aware that social engineering is the conduit to many cyberattacks.

“A lot of work has to go into them in order to make them successful, but the reasons we’re seeing these being used more commonly is that they’re relatively easy to do with no technical knowledge needed to send one of these things out,” he said. Even if these attacks have a success rate of less than 1 percent, Hassold noted, threat actors can still net tens of thousands of dollars a month.

The Simple, Yet Successful Tactics of London Blue

On a positive note, despite being so organized, groups like London Blue are still using old-school tactics such as the “Nigerian prince” scam, in which poor grammar and spelling are prominent. Red flags should be easy to spot. Yet, somehow, these scams still work on a very limited scale.

“They’re still around because they are successful enough,” said Hassold. “Even though most people would look at one of those things and ask ‘how could anyone actually fall for this?’, there’s always going to be a tiny population of people that will fall for it. They prey on central components of the human brain, like trust, fear and anxiety.” Those components are usually on overdrive when an employee gets an email he or she believes is coming from a CEO or CFO.

Not only have London Blue’s tactics remained the same over the last few years, but its BEC attack isn’t all that complicated. According to Agari’s report, the threat group uses a throwaway email address and changes the display name to match the CEO or CFO of a company. Attackers then send an email to the target financial executive — from their collection of email addresses — asking them to initiate a money transfer for some made-up reason. If London Blue gets a response from the victim, it replies with one or two bank accounts it controls for the money transfer.
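
To make the display-name trick concrete, here is a minimal, hypothetical sketch of how a mail filter might flag it. The executive list, corporate domain and example address are illustrative assumptions, not details from Agari’s report:

```python
# Hypothetical check: flag messages whose display name matches a known executive
# but whose sending domain is not the corporate domain (display name deception).
from email.utils import parseaddr

KNOWN_EXECUTIVES = {"jane doe", "john smith"}  # assumed list of CEO/CFO names
CORPORATE_DOMAIN = "example.com"               # assumed corporate mail domain

def is_suspected_bec(from_header: str) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    name_matches_exec = display_name.strip().lower() in KNOWN_EXECUTIVES
    return name_matches_exec and domain != CORPORATE_DOMAIN

# A throwaway webmail address spoofing the CFO's display name would be flagged.
print(is_suspected_bec('"Jane Doe" <cfo.urgent.request@freemail.example>'))  # True
```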

Go Back to Security Basics

There’s no reason to believe that the rise in senior-level phishing attacks is going to stop anytime soon. So what are the best tactics to prevent this type of attack?

The easiest solution, of course, is to avoid clicking on links or attachments that appear suspicious. Even if an email seems to be legitimately coming from someone you know, it’s best to think twice before clicking or replying.

“We’ve been accustomed to just simply reacting or responding to emails,” said Hassold. “That’s how we do business, but I think part of what we need to do is take a second to stop and think about what we’re looking at before we take any action.”

Like anything related to security, doing your due diligence is a must, even for day-to-day emailing. While security awareness training for the C-suite is never a bad idea, in the case of a BEC attack, it may not be immediately helpful. Because these attacks have such a low overall success rate, you’d need a perfect 0 percent click rate in security awareness simulations to completely prevent them. Additionally, in Hassold’s experience, CEOs and CFOs are generally less receptive to security awareness training.

“They are extremely busy doing a lot of other different types of activities, so sitting down and having them learn about what the threats are to the business is difficult,” he explained.

CSOs and CISOs: Brush Up on Your Marketing Skills

Instead of awareness training, your chief security officer (CSO) or chief information security officer (CISO)’s time may be better spent making sure other executives understand cyber risks in a way that resonates with them — for example, by showing financial executives real-world incidents that have cost companies millions of dollars. No executive wants his or her company to be the next Maersk; the container shipping conglomerate lost up to $300 million and had to reinstall 45,000 PCs and 4,000 servers after being hit by NotPetya ransomware in 2017, according to ZDNet.

I recall having a long conversation about security awareness with the CSO of a large beverage company, who told me that when it comes to convincing other executives of the importance of security, you need to act like the marketing department and sell them on the concept. This CSO often has her team create pitch decks full of real-world examples to underscore the importance of proper security hygiene. This tactic can work wonders when executed effectively.

Don’t Underestimate the Threat of Business Email Compromise

For Hassold, the biggest takeaway from Agari’s report is how groups like London Blue acquire their information.

“These groups are using legitimate services used by sales teams all over the world to curate their targets,” he said.

Using popular sales prospecting tools, threat groups can narrow targets by granular demographics and export them into a nice CSV file. The report concluded that “the pure scale of the group’s target repository is evidence that BEC attacks are a threat to all businesses, regardless of size or location.” Agari also predicted that the use of legitimate services for malicious means will increase in the future.

Business email compromise attacks are clearly a major threat for IT and security leaders to keep an eye on as attackers continue upping their game and making their emails look more legitimate. A strong security culture, combined with a back-to-the-basics approach to security training, can help enterprises avoid being on the receiving end of a successful attack.

The post Why You Should Be Worried About London Blue’s Business Email Compromise Attacks appeared first on Security Intelligence.

Maximize Your Defenses by Fine-Tuning the Oscillation of Cybersecurity Incidents

Information security is an interesting field — or, perhaps more accurately, a constant practice. After all, we’re always practicing finding vulnerabilities, keeping threats at bay, responding to cybersecurity incidents and minimizing long-term business risks.

The thing is, it’s not an exact science. Some people believe that’s the case, but they are only fooling themselves. Some security professionals strive for perfection in terms of their documentation. Others want their users to make good decisions all the time. I’ve even had people ask if I could do my best to provide a clean vulnerability and penetration testing report when doing work for them. Scary stuff.

I believe we’ve reached this point of striving for perfection largely due to compliance. Rather than truly addressing security gaps, we’re stuck in the mindset of checking boxes so that someone, somewhere can get the impression that work is being done and all is well in IT. Striving for perfection only serves to skew expectations and set everyone involved up for failure. The reality is you’re never going to have a perfect state of security, but you can have reasonable security if you take the proper steps.

Ready, Set, Practice

To improve enterprise security, organizations must do what I refer to as fine-tuning the oscillation of their security program. What do I mean by that? Let me give you a car racing analogy.

I compete in the Spec Miata class with the Sports Car Club of America (SCCA). It’s a super-competitive class with very little room for mistakes. Everything that we do as Spec Miata racers has to be fine-tuned — that is, if we’re going to win. Everything matters, from how hard we get on the brakes to how quickly we turn the steering wheel to how we get on and off the throttle. Even the turn-in points and apexes of corners are extremely important. Each little thing we do either works in our favor or works against us.

In car racing, fine-tuning the oscillation means getting better and better at the little things over time. In other words, we minimize atypical events — the mistakes that would show up as spikes on a graph — and get more consistent the more we race. You can certainly make improvements throughout a single race, but most fine-tuning comes with experience and years of seat time.

Make Small Adjustments Over Time

Information security is no different. In the context of your overall security program, threats, vulnerabilities and subsequent cybersecurity incidents represent the oscillation. If you’re looking for a visual, fine-tuning the oscillation means minimizing the amplitude and maximizing the frequency of a sine wave to the point where you have a tiny squiggly line that represents your security events. It’s almost a straight line, but as I said before, there’s no such thing as perfection in security.
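
As a rough illustration of that visual (my own sketch, not a formula from the article), the oscillation can be written as a sine wave whose amplitude you push toward zero while the frequency of small, routine events stays high:

```latex
% A = impact of each security event, f = how often routine events surface.
% "Fine-tuning the oscillation" means driving A toward zero even as f stays high,
% leaving the tiny squiggly line described above.
y(t) = A \sin(2\pi f t), \qquad A \to 0
```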

Instead of having low-hanging fruit such as missing patches and weak passwords, you’re staying on top of patch management and password policy enforcement. Instead of a lack of network visibility, you have systems and technologies in place that allow you to see things happening in real time. Instead of experiencing a security incident, you’re able to prevent or mitigate the threat. Instead of a breach, you have business as usual.

Rather than playing by the terms of malicious actors seeking to bring down your business, you are the one in control. This is all done through acknowledging your weaknesses and blind spots and making small adjustments over time.

Minimize the Impact of Cybersecurity Incidents

Start viewing your security program from this perspective by asking a few simple questions. What areas need the most attention? Do you have some quick wins that you could start with to get your momentum going? Most organizations have a handful of areas with known security gaps that are creating big exposures — things like third-party patching, unstructured (and unprotected) information scattered about networks, and user security awareness and training. Aim to quickly close the gaps that create the greatest risk so you can spend more focused time on the smaller, but more difficult, problems.

Stretching out that sine wave and fine-tuning the oscillation of impactful cybersecurity incidents should be your ultimate goal. Be it racing cars or running a security department, time, money and effort are the essential elements. If you’re going to do either one well, it’s going to require good information, solid decision-making, and intentional and disciplined practice over and over again. That’s the only way you’ll get better.

The post Maximize Your Defenses by Fine-Tuning the Oscillation of Cybersecurity Incidents appeared first on Security Intelligence.

Something In Common: Two Notorious Russian-Speaking Hacking Groups Found Sharing Infrastructure With Each Other

Kaspersky Lab experts have identified an overlap in cyberattacks between two infamous threat actors: GreyEnergy – believed to be a successor of BlackEnergy – and the Sofacy cyberespionage group. Both actors used the same servers at the same time, albeit for different purposes.

The BlackEnergy and Sofacy hacking groups are considered two of the major actors in the modern cyberthreat landscape. In the past, their activities often led to devastating national-level consequences. BlackEnergy carried out one of the most notorious cyberattacks in history with its 2015 attack against Ukrainian energy facilities, which led to power outages.

Meanwhile, the Sofacy group caused havoc with multiple attacks against US and European governmental organisations, along with national security and intelligence agencies. A connection between the two groups had long been suspected but never proven — until now. GreyEnergy, BlackEnergy’s successor, was found using malware to attack industrial and critical infrastructure targets, mainly in Ukraine, and demonstrated strong architectural similarities with BlackEnergy.

Kaspersky Lab’s ICS CERT department, responsible for research into and elimination of threats to industrial systems, found two servers, hosted in Ukraine and Sweden, that were used by both threat actors at the same time in June 2018. The GreyEnergy group used the servers in its phishing campaign to store a malicious file, which users downloaded when they opened a text document attached to a phishing e-mail. At the same time, Sofacy used the servers as a command-and-control centre for its own malware. Because both groups used the servers for only a relatively short time, the coincidence suggests shared infrastructure. This was confirmed by the fact that the two threat actors were observed targeting the same company with spear phishing emails within a week of each other. What’s more, both groups used similar phishing documents under the guise of e-mails from the Ministry of Energy of the Republic of Kazakhstan.

“The compromised infrastructure found to be shared by these two threat actors potentially points to the fact that the pair not only have the Russian language in common, but that they also cooperate with each other. It also provides an idea of their joint capabilities and creates better picture of their plausible goals and potential targets. These findings add another important piece into public knowledge about GreyEnergy and Sofacy. The more the industry knows about their tactics, techniques and procedures, the better security experts can do their job in protecting customers from sophisticated attacks,” said Maria Garnaeva, security researcher at Kaspersky Lab ICS CERT.

To protect businesses from attacks by such groups, Kaspersky Lab recommends that customers:

Provide dedicated cybersecurity training for employees, educating them to always check the link address and the sender’s email address before clicking anything.

Introduce security awareness initiatives, including gamified training with skills assessments and reinforcement through the repetition of simulated phishing attacks.

Automate updates for operating systems, application software and security solutions on systems that are part of the enterprise’s IT network as well as its industrial network.

Deploy a dedicated protection solution empowered with behaviour-based anti-phishing technologies, as well as anti-targeted attack technologies and threat intelligence, such as the Kaspersky Threat Management and Defense solution. These are capable of spotting and catching advanced targeted attacks by analysing network anomalies, giving cybersecurity teams full visibility over the network and automating response.

Read the full version of the Kaspersky Lab ICS CERT report here.

About Kaspersky Lab

Kaspersky Lab is a global cybersecurity company, which has been operating in the market for over 21 years. Kaspersky Lab’s deep threat intelligence and security expertise is constantly transforming into next generation security solutions and services to protect businesses, critical infrastructure, governments and consumers around the globe. The company’s comprehensive security portfolio includes leading endpoint protection and a number of specialised security solutions and services to fight sophisticated and evolving digital threats. Over 400 million users are protected by Kaspersky Lab technologies and we help 270,000 corporate clients protect what matters most to them.

Learn more at www.kaspersky.com.

The post Something In Common: Two Notorious Russian-Speaking Hacking Groups Found Sharing Infrastructure With Each Other appeared first on IT Security Guru.

What Can Consumers and IT Decision-Makers Do About the Threat of Malvertising?

If you haven’t already heard of malvertising, it’s one of the latest portmanteaus you’ll hear more about in 2019. Malvertising, or malicious advertising, is a type of online attack in which threat actors hide malicious code within an advertisement as a means to infect systems with malware. It works like any other type of malware, but can be found in ads across the internet — even legitimate websites such as The New York Times and BBC.

While these attacks have been around for several years, their frequency is escalating, and the threat to the enterprise is becoming more challenging to diagnose.

Frank Downs, director of cybersecurity practices at the Information Systems Audit and Control Association (ISACA), recognizes malvertising as the natural evolution of malware in today’s world of higher security.

“Leveraging traditional advertising capabilities, it makes it much easier for a malicious actor to seem legitimate,” he said.

Whether you’re at home, on a mobile device or sitting at your desktop at work, discerning which ads contain malware is difficult — especially compared to attacks such as phishing, where malicious messaging may be easier to detect.

So what can be done to educate both end users and IT decision-makers? Do workable strategies to defend against malvertising exist?

Ad-Blocking Software: The Ups and Downs of the Tried and True

While it’s easy to become discouraged given the perniciously stealthy nature of malvertising, it’s important to remember that ad-blocking software can handle a great deal of these threats by ensuring that most ads are never even presented to the user.

“Solutions exist which range from simple browser plugins, such as AdBlock Plus, to advanced traffic filtering tools,” said Downs.

He went on to single out an open-source, community-led initiative that’s gained some traction among cyber enthusiasts: Pi-hole.

“These devices are cheap, easily configured, community-developed systems which run on small Raspberry Pi devices. They block over 100,000 advertising domains and have gained an avid following online, making them more effective every day,” Downs explained.

However, Pi-hole isn’t for everyone. Most enterprises only need to deploy ad-blocking software and stop users from disabling it. If a valid use case requires a user to access a specific website, the security team should be alerted so they can determine the next course of action. The downside with this option is that it’s cumbersome and not user-friendly, resulting in users calling support teams to complain about how their workflow is negatively impacted.

“The reality is, no amount of user training is going to stop the problem. Enterprise CXOs have enough to concern themselves with,” said Sherban Naum, senior vice president of corporate strategy and technology for Bromium. “Malvertising is a pain that can be easily remedied by isolating the entire session, allowing a user the freedom to surf the web without the risk of compromise.”

Naum said he is seeing more customers taking the isolation route to remove the user from the decision tree when it comes to real-time runtime security.

Where Does the Buck Stop?

This is all practical for the well-informed enterprise, but end-user awareness is critical as malvertising proliferates. As it stands, users generally lack understanding of how ads and malware work together.

While it’s easy to place the onus on ad-blocking software providers, the issue is surrounded by complexity and extends beyond ad blockers. Because legitimate webpages benefit financially from ads, they’re asking users to disable ad blockers to access their site.

“The practice of asking users to disable a security product for their own benefit is troubling,” said Naum. “Ad blocker companies are doing the right thing to block ads, but users are left with making a decision to either maintain the ad blocker or disable it, as most see legitimate, well-known categorized websites as safe.”

What users may not be aware of is that these large sites are fed by hundreds of random servers that aren’t under the control of the top-level domain provider. This leaves users, employees and consumers as the final security decision-makers, which is anything but optimal.

“What would help is if large sites didn’t prompt users to disable security tools but rather let the visitor access the site and focus more on delivering their service than earning revenue on ads,” Naum said.

Return to Security Best Practices to Deal With Malvertising

That’s obviously easier said than done. Because the threat of malvertising shows no signs of slowing down, sites that run ads may face the unfortunate dilemma of having to choose between ad revenue and keeping visitors safe. Until that happens, it’s our responsibility to be informed and do what we can.

To accomplish this, we must come to terms with the fact that we can’t stop the unknown or trust systems that are entirely out of our control. Further, enterprises must stop relying on legacy architectures and systems to identify attacks.

“Once you have accepted that you need to isolate the untrusted, then happy clicking on malware isn’t an issue and cybercrime is less effective,” said Naum. “However, perhaps the best way of looking at this holistically is that there will always be cybercrime and the enterprise needs to focus on what they are doing to ensure their users are not a victim.”

Malvertising is one more threat that will keep your IT decision-makers up at night, but any company with a protection-first mindset should be able to remain ahead of the curve. Security awareness training for the user may yield limited results in stopping this threat, but in cases like this, a security-minded C-suite will always be ahead of the game.

The post What Can Consumers and IT Decision-Makers Do About the Threat of Malvertising? appeared first on Security Intelligence.

Bring Order to Chaos By Building SIEM Use Cases, Standards, Baselining and Naming Conventions

Security operations centers (SOCs) are struggling to create automated detection and response capabilities. While custom security information and event management (SIEM) use cases can allow businesses to improve automation, creating use cases requires clear business logic. Many security organizations lack efficient, accurate methods to distinguish between authorized and unauthorized activity patterns across components of the enterprise network.

Even the most intelligent SIEM can fail to deliver value when it’s not optimized for use cases, or if rules are created according to incorrect parameters. Creating a framework that can accurately detect suspicious activity requires baselines, naming conventions and effective policies.

Defining Parameters for SIEM Use Cases Is a Barrier to SOC Success

Over the past few years, I’ve consulted with many enterprise SOCs to improve threat detection and incident response capabilities. Regardless of SOC maturity, most organizations struggle to accurately define the difference between authorized and suspicious patterns of activity, including users, admins, access patterns and scripts. Countless SOC leaders are stumped when they’re asked to define authorized patterns of activity for mission-critical systems.

SIEM rules can be used to automate detection and response capabilities for common threats such as distributed denial-of-service (DDoS), authentication failures and malware. However, these rules must be built on clear business logic for accurate detection and response capabilities. Baseline business logic is necessary to accurately define risky behavior in SIEM use cases.

Building a Baseline for Cyber Hygiene

Cyber hygiene is defined as the consistent execution of activities necessary to protect the integrity and security of enterprise networks, including users, data assets and endpoints. A hygiene framework should offer clear parameters for threat response and acceptable use based on policies for user governance, network access and admin activities. Without an understanding of what defines typical, secure operations, it’s impossible to create an effective strategy for security maintenance.

A comprehensive framework for cybersecurity hygiene can simplify security operations and create guidelines for SIEM use cases. In turn, capturing an effective baseline for systems can strengthen security frameworks and create order in chaos. Established standards, such as a naming convention, create the clear parameters needed to support better hygiene and threat detection capabilities grounded in business logic.

VLAN Network Categories

For the purpose of simplified illustration, imagine that your virtual local area networks (VLANs) are categorized among five criticality groups — named A, B, C, D and E — with the mission-critical VLAN falling into the A category (<vlan_name>_A).

A policy may be created to dictate that A-category VLAN systems can communicate directly with any other category without compromising data security. However, communication with the A-category VLAN from B, C, D or E networks is not allowed. Authentication to a jump host can accommodate authorized exceptions to this standard, such as when E-category users need access to an A-category server.

Creating a naming convention and policy for VLAN network categories can help you develop simple SIEM use cases to prevent unauthorized access to A resources and automatically detect suspicious access attempts.
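
As a minimal illustration, a rule along those lines might look like the following sketch. It assumes events that already carry source and destination VLAN names following the <vlan_name>_<category> convention, plus a known jump host address; the field names and values are assumptions for the example:

```python
# Hypothetical SIEM-style rule: flag traffic into an A-category VLAN that does not
# originate from another A-category VLAN or from the authorized jump host.
JUMP_HOST_IP = "10.0.0.10"  # assumed address of the jump host used for exceptions

def vlan_category(vlan_name: str) -> str:
    # "erp_db_A" -> "A", per the <vlan_name>_<category> convention
    return vlan_name.rsplit("_", 1)[-1].upper()

def is_unauthorized_access(event: dict) -> bool:
    dest_is_critical = vlan_category(event["dest_vlan"]) == "A"
    src_is_critical = vlan_category(event["src_vlan"]) == "A"
    via_jump_host = event.get("src_ip") == JUMP_HOST_IP
    return dest_is_critical and not (src_is_critical or via_jump_host)

event = {"src_vlan": "guest_wifi_E", "dest_vlan": "erp_db_A", "src_ip": "10.4.2.17"}
print(is_unauthorized_access(event))  # True -> raise an alert for investigation
```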

Directory Services and Shared Resources

You can also use naming convention frameworks to create a policy for managing groups of user accounts according to access level in directory services, such as Lightweight Directory Access Protocol (LDAP) or Active Directory (AD). A standardized naming convention for directory services provides a clear framework for acceptable user access to shared folders and resources. AD users categorized in the D category, for example, may not have access to A-category folders (<shared_folder_name>_A).

Creating effective SIEM rules based on these use cases is a bit more complex than VLAN business logic since it involves two distinct technologies and potentially complex policies for resource access. However, creating standards that connect user access to resources establishes clear parameters for strict, contextual monitoring. Directory users with A-category access may require stricter change monitoring due to the potential for abuse of admin capabilities. You can create SIEM use cases to detect other configuration mistakes, such as a C-category user who is suddenly escalated to A-category.
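
A minimal sketch of the escalation check described above might look like this. It assumes directory change events that carry the affected group name (suffixed with its category) and the user’s own category; the ranking and field names are illustrative assumptions:

```python
# Hypothetical rule: alert when a user is added to a group whose category is more
# privileged than the user's own (e.g., a C-category user escalated to _A access).
PRIVILEGE_RANK = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}  # assumed ordering

def category_of(name: str) -> str:
    return name.rsplit("_", 1)[-1].upper()

def is_suspicious_escalation(change: dict) -> bool:
    group_rank = PRIVILEGE_RANK[category_of(change["group"])]
    user_rank = PRIVILEGE_RANK[change["user_category"]]
    return change["action"] == "member_added" and group_rank > user_rank

change = {"action": "member_added", "group": "payroll_share_A", "user_category": "C"}
print(is_suspicious_escalation(change))  # True -> review the group membership change
```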

Username Creation

Many businesses are already applying some logic to standardize username creation for employees. A policy may dictate that users create a seven-character alias that involves three last-name characters, two first-name characters and two digits. Someone named Janet Doe could have the username DoeJa01, for example. Even relatively simple username conventions can support SIEM use cases for detecting suspicious behavior. When eight or more characters are entered into a username field, an event could be triggered to lock the account until a new password is created.

The potential SIEM use cases increase with more complex approaches to username creation, such as 12-character usernames that combine last- and first-name characters with the employee’s unique HR-issued identification. A user named Jonathan Doerty, for instance, could receive an automatically generated username of doertjo_4682. Complex usernames can create friction for legitimate end users, but some minor friction can be justified if it provides greater safeguards for privileged users and critical systems.

An external threat actor may be able to extrapolate simple usernames from social engineering activities, but they’re unlikely to guess an employee’s internal identification number. SIEM rules can quickly detect suspicious access attempts based on username field entries that lack the required username components. Requiring unique identification numbers from HR systems can also significantly lower the risk of admins creating fake user credentials to conceal malicious activity.
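
A minimal sketch of how both conventions could be validated when a username field entry is logged follows. The patterns mirror the DoeJa01 and doertjo_4682 examples above; treating anything that fails the pattern as suspicious is an assumption for illustration:

```python
# Hypothetical username-convention checks based on the examples above.
import re

SIMPLE_PATTERN = re.compile(r"^[A-Za-z]{5}\d{2}$")   # e.g., DoeJa01
COMPLEX_PATTERN = re.compile(r"^[a-z]{7}_\d{4}$")    # e.g., doertjo_4682

def violates_convention(username: str, complex_scheme: bool = False) -> bool:
    pattern = COMPLEX_PATTERN if complex_scheme else SIMPLE_PATTERN
    return pattern.fullmatch(username) is None

print(violates_convention("DoeJa01"))             # False -> allowed
print(violates_convention("DoeJanet2019"))        # True  -> trigger the lockout rule
print(violates_convention("doertjo_4682", True))  # False -> allowed
```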

Unauthorized Code and Script Locations

Advanced persistent threats can evade detection by creating backdoor access to deploy carefully disguised malicious code. Standard naming conventions provide a cost-effective way to create logic that detects malware risks. A simple model for script names could leverage several data components, such as department name, script name and script author, resulting in authorized names like HR_WellnessLogins_DoexxJo. Creating SIEM parameters for acceptable script names can automate the detection of malware.

Creating baseline standards for script locations such as /var/opt/scripts and C:\Program Files\<org_name>\ can improve investigation capabilities when code is detected that doesn’t comply with the naming convention or storage parameters. Even the most sophisticated threat actors are unlikely to perform reconnaissance on enterprise naming convention baselines before creating a backdoor and hiding a script. SIEM rules can trigger a response from the moment a suspiciously named script begins to run or a code file is moved into an unauthorized storage location.
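
A minimal sketch combining both baselines appears below. The Department_ScriptName_Author pattern follows the HR_WellnessLogins_DoexxJo example, and the approved storage roots echo the paths above; the exact regular expression and organization name are assumptions:

```python
# Hypothetical check: flag a script whose name doesn't follow the
# Department_ScriptName_Author convention or whose path is outside approved roots.
import re
from pathlib import PurePath

NAME_CONVENTION = re.compile(r"^[A-Za-z]+_[A-Za-z0-9]+_[A-Za-z]+\.(ps1|py|sh)$")
APPROVED_ROOTS = ("/var/opt/scripts", r"C:\Program Files\ExampleOrg")  # assumed

def is_suspicious_script(full_path: str) -> bool:
    path = PurePath(full_path)
    bad_name = NAME_CONVENTION.fullmatch(path.name) is None
    bad_location = not any(full_path.startswith(root) for root in APPROVED_ROOTS)
    return bad_name or bad_location

print(is_suspicious_script("/var/opt/scripts/HR_WellnessLogins_DoexxJo.py"))  # False
print(is_suspicious_script("/tmp/.cache/update.py"))                          # True -> alert
```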

Scaling Security Response With Standards

Meaningful threats to enterprise data security often fly under the radar of even the most sophisticated threat detection solutions when there’s no baseline to define acceptable activity. SOC analysts have more technological capabilities than ever, but many are struggling to optimize detection and response with effective SIEM use cases.

Clear, scalable systems to define policies for acceptable activity create order in chaos. The smartest approach to creating effective SIEM use cases relies on standards, a strong naming convention and sound policy. It’s impossible to accurately understand risks without a clear framework for authorized activities. Standards, baselines and naming conventions can remove barriers to effective threat detection and response.

The post Bring Order to Chaos By Building SIEM Use Cases, Standards, Baselining and Naming Conventions appeared first on Security Intelligence.

Stay Ahead of the Growing Security Analytics Market With These Best Practices

As breach rates climb and threat actors continue to evolve their techniques, many IT security teams are turning to new tools in the fight against corporate cybercrime. The proliferation of internet of things (IoT) devices, network services and other technologies in the enterprise has expanded the attack surface every year and will continue to do so. This evolving landscape is prompting organizations to seek out new ways of defending critical assets and gathering threat intelligence.

The Security Analytics Market Is Poised for Massive Growth

Enter security analytics, which mixes threat intelligence with big data capabilities to help detect, analyze and mitigate targeted attacks and persistent threats from outside actors as well as those already inside corporate walls.

“It’s no longer enough to protect against outside attacks with perimeter-based cybersecurity solutions,” said Hani Mustafa, CEO and co-founder of Jazz Networks. “Cybersecurity tools that blend user behavior analytics (UBA), machine learning and data visibility will help security professionals contextualize data and demystify human behavior, allowing them to predict, prevent and protect against insider threats.”

Security analytics can also provide information about attempted breaches from outside sources. Analytics tools work together with existing network defenses and strategies and offer a deeper view into suspicious activity, which could be missed or overlooked for long periods due to the massive amount of superfluous data collected each day.

Indeed, more security teams are seeing the value of analytics as the market appears poised for massive growth. According to Global Market Insights, the security analytics market was valued at more than $2 billion in 2015, and it is estimated to grow by more than 26 percent over the coming years — exceeding $8 billion by 2023. ABI Research put that figure even higher, estimating that the need for these tools will drive the security analytics market toward a revenue of $12 billion by 2024.

Why Are Security Managers Turning to Analytics?

For most security managers, investment in analytics tools represents a way to fill the need for more real-time, actionable information that plays a role in a layered, robust security strategy. Filtering out important information from the massive amounts of data that enterprises deal with daily is a primary goal for many leaders. Businesses are using these tools for many use cases, including analyzing user behavior, examining network traffic, detecting insider threats, uncovering lost data, and reviewing user roles and permissions.

“There has been a shift in cybersecurity analytics tooling over the past several years,” said Ray McKenzie, founder and managing director of Red Beach Advisors. “Companies initially were fine with weekly or biweekly security log analytics and threat identification. This has morphed to real-time analytics and tooling to support vulnerability awareness.”

Another reason for analytics is to gain better insight into the areas that are most at risk within an IT environment. But in efforts to cull important information from a wide variety of potential threats, these tools also present challenges to the teams using them.

“The technology can also cause alert fatigue,” said Simon Whitburn, global senior vice president, cybersecurity services at Nominet. “Effective analytics tools should have the ability to reduce false positives while analyzing data in real-time to pinpoint and eradicate malicious activity quickly. At the end of the day, the key is having access to actionable threat intelligence.”

Personalization Is Paramount

Obtaining actionable threat intelligence means configuring these tools with your unique business needs in mind.

“There is no ‘plug and play’ solution in the security analytics space,” said Liviu Arsene, senior cybersecurity analyst at Bitdefender. “Instead, the best way forward for organizations is to identify and deploy the analytics tools that best fits an organization’s needs.”

When evaluating security analytics tools, consider the company’s size and the complexity of the challenges the business hopes to address. Organizations that use analytics may need to weigh features such as deployment models, scope and depth of analysis, forensics, and monitoring, reporting and visualization. Others may have simpler needs with minimal overhead and a smaller focus on forensics and advanced persistent threats (APTs).

“While there is no single analytics tool that works for all organizations, it’s important for organizations to fully understand the features they need for their infrastructure,” said Arsene.

Best Practices for Researching and Deploying Analytics Solutions

Once you have established your organization’s needs and goals for investing in security analytics, there are other important considerations to keep in mind.

Emphasize Employee Training

Chief information security officers (CISOs) and security managers must ensure that their staffs are prepared to use the tools at the outset of deployment. Training employees on how to make sense of information among the noise of alerts is critical.

“Staff need to be trained to understand the results being generated, what is important, what is not and how to respond,” said Steve Tcherchian, CISO at XYPRO Technology Corporation.

Look for Tools That Can Change With the Threat Landscape

Security experts know that criminals are always one step ahead of technology and tools and that the threat landscape is always evolving. It’s essential to invest in tools that can handle relevant data needs now, but also down the line in several years. In other words, the solutions must evolve alongside the techniques and methodologies of threat actors.

“If the security tools an organization uses remain stagnant in their programming and update schedule, more vulnerabilities will be exposed through other approaches,” said Victor Congionti of Proven Data.

Understand That Analytics Is Only a Supplement to Your Team

Analytics tools are by no means a replacement for your security staff. Having analysts who can understand and interpret data is necessary to get the most out of these solutions.

Be Mindful of the Limitations of Security Analytics

Armed with security analytics tools, organizations can benefit from big data capabilities to analyze data and enhance detection with proactive alerts about potential malicious activity. However, analytics tools have their limitations, and enterprises that invest must evaluate and deploy these tools with their unique business needs in mind. The data obtained from analytics requires context, and trained staff need to understand how to make sense of important alerts among the noise.

The post Stay Ahead of the Growing Security Analytics Market With These Best Practices appeared first on Security Intelligence.

Most attacks against energy and utilities occur in the enterprise IT network

The United States has not been hit by a paralyzing cyberattack on critical infrastructure like the one that sidelined Ukraine in 2015. That attack disabled Ukraine's power grid, leaving more than 700,000 people in the dark.

But the enterprise IT networks inside energy and utilities organizations have been infiltrated for years. Based on an analysis by the U.S. Department of Homeland Security (DHS) and FBI, these networks have been compromised since at least March 2016 by nation-state actors who perform reconnaissance activities, looking for industrial control system (ICS) designs and blueprints to steal.