The US Treasury placed sanctions on three North Korea-linked hacking groups: the Lazarus Group, Bluenoroff, and Andariel.
The groups are behind several hacking operations that resulted in the theft of hundreds of millions of dollars from financial institutions and cryptocurrency exchanges worldwide.
According to the Treasury, the three groups “likely” stole $571 million in cryptocurrency alone from five exchanges in Asia between January 2017 and September 2018.
Intelligence analysts believe the groups are under the control of the Reconnaissance General Bureau, which is North Korea’s primary intelligence bureau.
“Treasury is taking action against North Korean hacking groups that have been perpetrating cyber attacks to support illicit weapon and missile programs,” said Sigal Mandelker, Treasury Under Secretary for Terrorism and Financial Intelligence.
“We will continue to enforce existing US and UN sanctions against North Korea and work with the international community to improve the cybersecurity of financial networks,” she added.
The activity of the Lazarus Group surged in 2014 and 2015; its members mostly used custom-tailored malware in their attacks, and experts who investigated the crew consider it highly sophisticated.
This threat actor has been active since at least 2009, possibly as early as 2007, and it was involved in both cyber espionage campaigns and sabotage activities aimed at destroying data and disrupting systems.
“According to industry and press reporting, by 2018, Bluenoroff had attempted to steal over $1.1 billion dollars from financial institutions and, according to press reports, had successfully carried out such operations against banks in Bangladesh, India, Mexico, Pakistan, Philippines, South Korea, Taiwan, Turkey, Chile, and Vietnam,” continues the US Treasury.
Andariel carried out cyber attacks against online gambling and poker sites.
The sanctions placed by the US Treasury aim to block the groups’ access to the global financial system and to freeze any assets they hold under US jurisdiction.
“As a result of today’s action, all property and interests in property of these entities, and of any entities that are owned, directly or indirectly, 50 percent or more by the designated entities, that are in the United States or in the possession or control of US persons, are blocked and must be reported to OFAC,” the Treasury explains.
The post The US Treasury placed sanctions on North Korea linked APT Groups appeared first on Security Affairs.
Symantec Axes Hundreds of US Jobs
American software giant Symantec is cutting hundreds of jobs at four different sites across the US as part of a $100 million restructuring program.
Notices filed by the company in August under the Worker Adjustment and Retraining Notification (WARN) Act indicate that the roles of 230 Symantec employees will be terminated on October 15, 2019.
The company's Californian headquarters at Mountain View will bear the brunt of the losses, with 152 job cuts expected. In San Francisco 18 jobs will go, and a further 24 will be axed from the company's site in Springfield, Oregon. In Culver City, Los Angeles County, 36 positions will be scrapped. Employees were notified in early August.
The cuts will affect many different job classifications, but most of the targeted roles are technical. According to the Employment Development Department (EDD) filings made by Symantec in California, many software engineer and software development engineer jobs are to go, along with a raft of middle-management positions.
In a letter which accompanied the filings, Symantec wrote: “Layoffs are expected to be permanent," before stating, "None of the affected employees are represented by a union, and no bumping rights exist."
Symantec, which supplies 50 million people with Norton antivirus software and LifeLock identity theft protection, has over 11,000 employees globally. The US job cuts are part of a planned 7% reduction in Symantec's international workforce announced last month alongside news of the company's $10.7 billion sale of its enterprise division to San Jose chipmaker Broadcom.
News of the cuts comes amid rumors that Symantec has received interest from two private-equity suitors who, according to the Wall Street Journal, are seeking to buy the cybersecurity firm for more than $16 billion.
The Journal reported that "Permira and Advent International Corp. recently approached Symantec proposing a takeover deal valuing Symantec at $26 to $27 a share that would hand them the company’s consumer operation while preserving the sale of its enterprise business to Broadcom Inc."
With the sale of its enterprise arm to Broadcom pending, it's not clear how the proposed deal would work if it was to go ahead.
Cisco Talos researchers discovered a new variant of a cryptocurrency-mining botnet tracked as WatchBog that is heavily using the Pastebin service for command and control (C2) operations.
The WatchBog bot is a Linux-based malware that has been active since at least late 2018 and focuses on mining the Monero cryptocurrency.
“Cisco Incident Response (CSIRS) recently responded to an incident involving the Watchbog cryptomining botnet,” reads the analysis published by Talos. “This Linux-based malware relied heavily on Pastebin for command and control (C2) and operated openly. CSIRS gained an accurate understanding of the attacker’s intentions and abilities on a customer’s network by analyzing the various Pastebins.”
The new WatchBog variant includes a new spreader module along with exploits for the following recently patched vulnerabilities in Linux applications:
- CVE-2019-11581 (Jira)
- CVE-2019-10149 (Exim)
- CVE-2019-0192 (Solr)
- CVE-2018-1000861 (Jenkins)
- CVE-2019-7238 (Nexus Repository Manager 3)
The malware also includes scanners for the Jira and Solr flaws, along with a brute-forcing module for CouchDB and Redis installations.
The operators behind the WatchBog botnet may also be monetizing the compromised hosts in other ways.
“During the investigation, Cisco IR found signs of hosts becoming a part of a separate botnet around the time of the Watchbog activity,” Talos notes.
During the installation phase, the bot checks for running processes associated with other cryptocurrency miners.
It then determines whether it can write to various directories, checks the system architecture, and makes three attempts to download and install a ‘kerberods’ dropper using wget or curl.
The installation script also retrieves the contents of a Pastebin URL containing a Monero wallet ID and mining information, then downloads the miner.
The script downloads encoded Pastebin content as text files, decodes it, and gives the resulting files execution permissions.
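The fetch-decode-execute flow described here is a generic stager pattern rather than WatchBog's literal code. A minimal Python sketch (hypothetical function names, no real URL, and it deliberately stops short of executing anything) illustrates the mechanics:

```python
import base64
import os
import stat
import tempfile

def stage_payload(fetch, url: str) -> str:
    """Illustrative stager: pull a base64 blob from a paste URL,
    decode it to a temp file, and mark that file executable.
    `fetch` is any callable returning the paste body as text
    (injected so the sketch needs no network access)."""
    blob = fetch(url)                      # in the wild: an HTTP GET
    payload = base64.b64decode(blob)
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(payload)
    # Equivalent of `chmod +x` for the file owner.
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    # A real bot would now run `path`; this sketch stops here.
    return path
```

From a defender's perspective, the observable artifacts are the outbound request to a paste service followed by a freshly written executable in a temp directory.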
The ‘download’ function performs similar operations, writing the contents retrieved from various file locations to disk; once it has determined the target architecture, it installs the appropriate miner.
WatchBog uses SSH for lateral movement: a dedicated script checks for SSH keys on the compromised system and attempts to use them to reach other hosts.
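Since the bot hunts for exactly these artifacts, auditing them yourself is a sensible incident-response step. A minimal sketch (assuming a standard `~/.ssh` layout; hashed `known_hosts` entries are skipped) that lists a user's private keys and previously contacted hosts:

```python
import glob
import os

def audit_ssh_artifacts(home: str):
    """Report the SSH material WatchBog-style bots look for:
    private key files and hostnames from known_hosts."""
    ssh_dir = os.path.join(home, ".ssh")
    # Private keys are the non-.pub files containing a PEM-style header.
    keys = []
    for path in glob.glob(os.path.join(ssh_dir, "*")):
        if path.endswith(".pub") or not os.path.isfile(path):
            continue
        with open(path, "r", errors="ignore") as f:
            if "PRIVATE KEY" in f.read(200):
                keys.append(path)
    # known_hosts lines begin with a host (or comma-separated hosts).
    hosts = set()
    kh = os.path.join(ssh_dir, "known_hosts")
    if os.path.exists(kh):
        with open(kh) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith(("#", "|")):  # "|" = hashed entry
                    hosts.update(line.split()[0].split(","))
    return keys, sorted(hosts)
```

Every key and host this turns up is a potential pivot point an intruder on that machine could have used.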
The post WatchBog cryptomining botnet now uses Pastebin for C2 appeared first on Security Affairs.
Cybersecurity Firm Employees Charged with Burglary of Courthouse Client
Two employees of a Colorado cybersecurity firm hired to test the security of an Iowa courthouse have been charged with burglary after allegedly breaking into the building.
Gary Edward Demercurio, 43, of Seattle, Wash., and Justin Lawson Wynn, 29, of Naples, Fla., were arrested at approximately 1 a.m. on Wednesday after being found inside the Dallas County Courthouse in possession of burglary tools.
Dallas County deputy sheriffs arrived at the scene after an alarm at the courthouse at 908 Court Street in Adel was tripped.
Demercurio and Wynn, who both work for global cybersecurity firm Coalfire, have been charged with third-degree burglary and possession of burglary tools.
At the time of their arrest, Demercurio and Wynn told Dallas County deputy sheriffs that "they were contracted to break into the building for Iowa courts to check the security of the building."
In a press release issued later that day, Iowa Judicial Branch confirmed that while the state court administration had hired cybersecurity firm Coalfire to carry out security testing, the midnight shenanigans allegedly committed by Wynn and Demercurio were not exactly what it had in mind.
While the administration had asked Coalfire to test vulnerabilities in the state’s electronic records system, it "did not intend, or anticipate, those efforts to include the forced entry into a building."
"It’s a strange case," said Dallas County Sheriff Chad Leonard on Wednesday. "We’re still investigating this thing."
When contacted for comment, Coalfire replied with the following statement: "Coalfire is a global cybersecurity firm that has conducted over 10,000 security assessments since 2001. We have performed hundreds of assessments for similar government agencies, and our employees work diligently to ensure our engagements are conducted with utmost integrity and in alignment with the objectives of our client.
"However, we cannot comment on this situation or any specific client engagements due to the confidential nature of our work and various security and privacy laws. Additionally, we cannot comment on this specific case as it is an active legal matter."
Demercurio was released from Dallas County Jail after posting a $57,000 bond. Wynn was likewise released after posting a bond of $50,000. Both men are scheduled to appear before Dallas County District Court for a preliminary hearing on September 23.
All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.
Genes and genomes are based on code -- just like the digital language of computers. But instead of zeros and ones, four DNA letters -- A, C, T, G -- encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.
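The four-letter alphabet maps cleanly onto binary: each base carries exactly two bits. A toy sketch (purely illustrative, not a bioinformatics tool) that packs a DNA string into an integer and back makes the "DNA is code" analogy concrete:

```python
# Each DNA base (A, C, G, T) can be encoded in 2 bits, so a genome is,
# information-theoretically, just a very long bit string.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def dna_to_bits(seq: str) -> int:
    """Pack a DNA sequence into an integer, two bits per base."""
    bits = 1  # leading sentinel bit preserves leading 'A's (00)
    for base in seq:
        bits = (bits << 2) | ENCODE[base]
    return bits

def bits_to_dna(bits: int) -> str:
    """Unpack the integer back into the original sequence."""
    bases = []
    while bits > 1:  # stop at the sentinel bit
        bases.append(DECODE[bits & 0b11])
        bits >>= 2
    return "".join(reversed(bases))
```

The round trip is lossless, which is the essay's point: possession of the sequence is, in principle, possession of the organism's source code.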
If this sounds to you a lot like software coding, you're right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we're dealing with molecules -- and sometimes actual forms of life -- the risks can be much greater.
Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it's a relatively simple operation by today's standards, it'll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they're running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient's immune system.
We have known the mechanics of DNA for some 60 plus years. The field of modern biotechnology began in 1972 when Paul Berg joined one virus gene to another and produced the first "recombinant" virus. Synthetic biology arose in the early 2000s when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.
In 2010 Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases the researchers created what could arguably be called new forms of life.
This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to "run" it in an already existing organism, from a virus to a wheat plant to a person.
In the future, it may be possible to build an entire complex organism such as a dog or cat, or recreate an extinct mammoth (currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.
Within the biological science community, urgent conversations are occurring about "cyberbiosecurity," an admittedly contested term for the space between biological and information systems, where vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.
These risks have occupied not only learned bodies -- the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively -- but have made it to mainstream media: genome editing was a major plot element in Netflix's Season 3 of "Designated Survivor."
Our worries are more prosaic. As synthetic biology "programming" reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.
Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happened again.
Even finished code still has problems. Again due to the complexity of modern software systems, "works properly" doesn't mean that it's perfectly correct. Modern software is full of bugs -- thousands of software flaws -- that occasionally affect performance or security. That's why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.
Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn't work in biology.
In nature, a similar type of trial and error is handled by "the survival of the fittest" and occurs slowly over many generations. But human-generated code from scratch doesn't have that kind of correction mechanism. Inadvertent or intentional release of these newly coded "programs" may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.
Unlike computer software, there's no way so far to "patch" biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to "patch" the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.
Opportunities for mischief and malfeasance often occur when expertise is siloed, fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn't make its way into the larger body of practitioners who have important contributions to make.
Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed, in either the biological and digital spheres of influence, classified and solely within the military, or exchanged only among a very small set of investigators.
What we need is more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Our digital and biological communities already have tools to identify and mitigate biological risks, and tools to write and deploy secure computer systems.
Those opportunities will not occur without effort or financial support. Let's find those resources, public, private, philanthropic, or any combination. And then let's use those resources to set up some novel opportunities for digital geeks and bionerds -- as well as ethicists and policymakers -- to share experiences, concerns, and come up with creative, constructive solutions to these problems that are more than just patches.
These are overarching problems; let's not let siloed thinking or funding get in the way of breaking down barriers between communities. And let's not let technology of any kind get in the way of the public good.
This essay previously appeared on CNN.com.
ESET researchers found an undocumented backdoor used by the infamous Stealth Falcon group, an operator of targeted spyware attacks against journalists, activists and dissidents in the Middle East. Also this week: the launch of the Safer Kids Online initiative, a guide to help parents protect their kids when they take selfies, and the discovery of a serious vulnerability.
MSOE Opens Cyber-Learning Center Built with $34m Alumnus Donation
A Wisconsin university today celebrated the grand opening of a new cyber-learning facility funded by a $34m donation from a former student and his wife.
Dwight Diercks graduated from the Milwaukee School of Engineering (MSOE) in 1990 with a degree in computer science and engineering. Now senior vice president of software engineering at California-based technology company NVIDIA, Diercks today serves as a regent of the university, which awarded him an honorary engineering doctorate in 2014.
A day-long program of events was held to mark the opening of the Dwight and Dian Diercks Computational Science Hall, which included a keynote address by Jensen Huang, founder, president, and CEO of NVIDIA.
According to the MSOE website, "Diercks Hall—and the courses taught within—position MSOE at the educational forefront in artificial intelligence (AI), deep learning, cyber security, robotics, cloud computing and other next-generation technologies."
The four-floor building features seven contemporary classrooms, nine innovative teaching laboratories, 25 offices for staff, and a 256-seat auditorium. At the heart of the hall is a state-of-the-art data center with an NVIDIA GPU-accelerated AI supercomputer, which is named Rosie after the women known as Rosies who programmed one of the earliest computers, the ENIAC. Rosie is also the name of Diercks' mother, who passed away in 2006.
On the building's third floor, the Caspian Cyber Security Laboratory will allow students to conduct real-world cybersecurity experiments and test defensive mechanisms in a professional and controlled environment. The room is grounded with special shielding paint and an electromagnetic field to prevent computer viruses that students are working on from spreading to the rest of campus through the wireless network.
The substantial donation given by Diercks and his wife, Dian, was bolstered with an additional $4m contributed by several individuals and corporations to support long-term operations and maintenance of the facility.
Speaking at today's live-streamed opening ceremony, held in the new hall's atrium, the mayor of Milwaukee, Tom Barrett, quipped, "When I first heard the words artificial intelligence I thought someone had heard I had inflated my SAT scores," before declaring Friday, September 13, 2019, to be Dwight and Dian Diercks Day throughout the entire city of Milwaukee.
After Diercks and his wife cut a red ribbon with a giant pair of scissors to officially open the hall, he shared with the crowd his pleasure at learning that the addition of an external staircase had brought the facility's final size to 65,536 square feet, the number of distinct values representable in 16 bits.
Since 1993, hackers have traveled to Las Vegas from around the world to demonstrate their skills at DefCon’s annual convention, and every year new horrors of cyber-insecurity are revealed as they wield their craft. Last year, for example, an eleven-year-old boy changed the election results on a replica of the Florida state election website in under ten minutes.
This year was no exception. Participants revealed all sorts of clever attacks and pathetic vulnerabilities. One hack allowed a convention attendee to commandeer control of an iPhone with a non-Apple-issue charging cord, one that is identical to the Apple version. Another group figured out how to use a Netflix account to steal banking information. But for our purposes, let’s focus on election security because without it democracy is imperiled. And if you think about it, what are the odds of something like DefCon being permitted in the People’s Republic of China?
Speaking of China (or Russia or North Korea or Iran or…) will the 2020 election be hacked?
In a word: Yes.
In 2016 Russia targeted elections systems in all 50 states.
A CNN article about DefCon’s now-annual Voting Village described the overall problem: many election officials and key players in the election business are not sufficiently worried to anticipate, recognize, and meet the challenges ahead.
While many organizations welcome the hijinks of DefCon participants — including the Pentagon — the voting machine manufacturers don’t generally seem eager to have hackers of any stripe show them where they are vulnerable… and that should worry you.
DefCon participants are instructed to break things, and they do just that. This year, Senator Ron Wyden (D-Ore.) toured DefCon’s Voting Village and he left with these words: “We need paper ballots, guys.”
Was the Senator right? Paper is the easiest solution, but not the only one. Because election machines have so far proven eminently breakable, we still need audited paper trails either way.
Paper trails are mission critical
After railing against previous findings of DefCon participants, Election Systems and Software (ES&S) CEO Tom Burt reversed his position in a Roll Call op-ed that called for paper records and mandatory machine testing in order to secure e-voting systems. It’s a welcome move as far as cybersecurity experts are concerned.
After a midterm election featuring irregularities in Georgia and North Carolina, other smaller hacks, and warnings from the likes of former Special Counsel Robert Mueller, there has been no meaningful action nationwide on election security, while the specter of serious interference remains. Senate Majority Leader Mitch McConnell (R-Ky.) has steadfastly refused to allow even bipartisan election security legislation to come to the floor for a vote, much less a debate, and for that reason he and the Republican party are blameworthy for placing politics above protecting our most cherished democratic right.
While the news is on overheated cycles covering every tweet, or sound bite, uttered by President Trump, critical issues like cybersecurity are not being addressed, and this matters — given recent DefCon news of election machines connected to the internet when they shouldn’t be, and the persistent threat of state-sponsored attacks on our democracy.
Think DARPA’s $10 million un-hackable election machine proves all is well? Not quite. Bugs during the setup of the DARPA wonder machine meant that DefCon’s participants didn’t have enough time to properly break the thing. In the absence of definitive proof to the contrary, we have to assume it can be hacked.
It is a well-established fact that Russia attempted to interfere in the 2016 election in all 50 states, and Israel — an ally of the president — recently disclosed that the Russian government identified President Trump as the candidate most likely to benefit Russia, and used cyberbots to help him win. The fact that President Trump won the election on the strength of just 80,000 votes spread across three key swing states shows how important it is to address the issue. We’re not talking about a blunderbuss approach to hacking the election here; plausible outcomes can be constructed, and it’s been known to happen before.
Some experts think it may soon be too late to secure 2020 against the threat of state-sponsored hacks. I do not. But the time for delaying in order to score political points has passed, and now is the time for action.
The post Prediction: 2020 election is set to be hacked, if we don’t act fast appeared first on Adam Levin.
The United States and its allies and partners should stop worrying about the risk of authoritarians splitting the Internet.
Instead, they should split it themselves, by creating a digital bloc within which data, services, and products can flow freely, excluding countries that do not respect freedom of expression or privacy rights, engage in disruptive activity, or provide safe havens to cybercriminals...
The league would not raise a digital Iron Curtain; at least initially, most Internet traffic would still flow between members and nonmembers, and the league would primarily block companies and organizations that aid and abet cybercrime, rather than entire countries.
Governments that fundamentally accept the idea of an open, tolerant, and democratic Internet but that struggle to live up to such a vision would have an incentive to improve their enforcement efforts in order to join the league and secure connectivity for their companies and citizens.
Of course, authoritarian regimes in China, Russia, and elsewhere will probably continue to reject that vision.
Instead of begging and pleading with such governments to play nice, from now on, the United States and its allies should lay down the law: follow the rules, or get cut off.
My initial reaction to this line of thought was not encouraging. Rather than continue exchanging Twitter messages, Rob and I had a very pleasant phone conversation to help each other understand our points of view. Rob asked me to document my thoughts in a blog post, so this is the result.
Rob explained that the main goal of the IFL is to create leverage to influence those who do not implement an open, tolerant, and democratic Internet (summarized below as OTDI). I agree that leverage is certainly lacking, but I wondered if the IFL would accomplish that goal. My reservations included the following.
1. Many countries that currently reject the OTDI might only be too happy to be cut off from the Western Internet. These countries do not want their citizens accessing the OTDI. Currently dissidents and others seeking news beyond their local borders must often use virtual private networks and other means to access the OTDI. If the IFL went live, those dissidents and others would be cut off, thanks to their government's resistance to OTDI principles.
2. Elites in anti-OTDI countries would still find ways to access the Western Internet, whether for personal, business, political, military, or intelligence reasons. The common person would be the most likely to suffer.
3. Segregating the OTDI would increase the incentives for "network traffic smuggling," whereby anti-OTDI elites would compromise, bribe, or otherwise corrupt Western Internet resources to establish surreptitious methods to access the OTDI. This would increase the intrusion pressure upon organizations with networks in OTDI and anti-OTDI locations.
4. Privacy and Internet freedom groups would likely strongly reject the idea of segregating the Internet in this manner. They are vocal and would apply heavy political pressure, similar to recent net neutrality arguments.
5. It might not be technically possible to segregate the Internet as desired by the IFL. Global business does not neatly differentiate between Western and anti-OTDI networks. Similar to the expected resistance from privacy and freedom groups, I expect global commercial lobbies to strongly reject the IFL on two grounds. First, global businesses cannot disentangle themselves from anti-OTDI locations, and second, Western businesses do not want to lose access to markets in anti-OTDI countries.
Rob and I had a wide-ranging discussion, but these five points in written form provide a platform for further analysis.
What do you think about the IFL? Let Rob and me know on Twitter, via @robknake and @taosecurity.
Turns out it's actually a sunny day in Oslo today, although it's the last one I'll see here for quite some time before heading off to Denmark then other European things for the remainder of this trip. I'm talking a little about those events (all listed on my events page), this week's changes to EV, more data breaches and a somewhat semantic argument about the definition of "theft".
- Entrust are convinced you should still pay them for EV certs (even though the primary value proposition they're still promoting is now gone...)
- Scott killed a million bucks worth of EV certs (it turns out that extended validation isn't always so... extended)
- The Void.to hacking forum got breached and is now in HIBP (a lot of private messages in there people really wouldn't want being traced back to them)
- Garmin in South Africa had a whole bunch of credit cards siphoned off (looks like a classic Magecart attack)
- Does a data breach actually constitute "theft" given the original owner isn't deprived of it? (that's a link to the Twitter thread on it, I think the term is a bit overloaded TBH)
- Sponsored by Okta: You wouldn’t roll your own hashing algorithm, so why build your own auth? Secure users in mins with a free dev account.
Worldwide concern is increasing over the adverse effects that deepfakes could have on society, and for good reason. Recently, an employee of a UK-based energy company was tricked into thinking he was talking on the phone with his boss, the CEO of the German parent company, who asked him to transfer $243,000 to a Hungarian supplier. Of course, the employee was not speaking with the actual CEO, but with a scammer impersonating him through voice-altering AI.
This kind of social engineering attack is not new. In fact, merely two months ago, cybersecurity researchers identified three successful deepfake audio attacks on companies. In each case, a fake “CEO” called a financial officer to ask for an urgent money transfer. The real CEOs’ voices had been taken from earnings calls, YouTube videos, TED talks, and other recordings, and fed into an AI program that enabled the fraudsters to imitate them.
These incidents are the audio counterpart of deepfake videos, which have been causing global alarm for the past couple of years. As we become accustomed to the existence of deepfakes, our trust in any video or audio footage may erode, including the genuine kind. Video, once the ultimate form of truth compared with easily altered photographs, can now deceive us as well.
And this brings us to the question:
How safe is your business in the face of the deepfake threat?
What are Deepfakes?
Deepfakes are fabricated video and audio footage of individuals, engineered to make them appear to have said and done things they never did. “Deep” refers to the “deep learning” technology used to produce the media, and “fake” to its artificial nature. Most of the time, a person’s face is superimposed on someone else’s body, or their actual likeness is altered in such a way that it appears to say and do things out of character.
The term was born in 2017 when a Reddit user posted a fake adult video showing the faces of some Hollywood celebrities. Later, the user also published the machine learning code used to create the video.
Can we detect and stop Deepfakes?
Right now, researchers and companies are investigating how AI can be used to detect and weed out deepfakes. New techniques are starting to emerge that could help us identify which pictures and recordings are real and which are fake.
For example, Facebook, Microsoft, the Partnership on AI coalition, and academics from several universities are launching a contest to help improve the detection of deepfakes. They aim to encourage people to produce a technology that can be used by anyone to detect when deepfake material has been created. The Deepfake Detection Challenge will feature a data set and leaderboard, alongside grants and awards, to motivate participants to design new methods of identifying and stopping fake footage meant to deceive others.
Yet, this won’t prevent the fake media from being created, shared, seen and heard by millions of people before it is removed. And without doubt, it can be extremely difficult to face the consequences and repair the damage once malicious materials get distributed.
How can you spot Deepfake videos?
Until some highly reliable technical solutions are designed, we should learn to identify the tell-tale signs of deepfakes. So, here are the flaws you should be looking for:
- Blinking – According to research, eye blinking tends to be poorly reproduced or unnaturally rare in deepfake videos.
- Face borders – Watch out for blurry face borders that subtly blend into the background.
- Artificial-looking skin – If the face looks unnaturally smooth, as if it has been edited, this may be another warning sign. Also watch out for a skin tone that differs slightly from the rest of the body.
- Slow speech and odd intonation – Sometimes the person being impersonated talks unusually slowly, or the fake voice doesn’t quite match the real person’s.
- An overall strange look and feel – In the end, trust your instinct. Sometimes you can simply tell something’s not right.
At the moment, deepfakes are fairly easy to spot. But as the technology progresses, it will gradually become more difficult.
Deepfakes could destroy everything
Here are the areas on which deepfakes could have a highly negative impact:
#1. Politics
Deepfakes could influence elections, since they can put words into politicians’ mouths and make them look like they’ve done or said things which, in fact, they haven’t. Deepfake producers could target popular social media channels, where shared content can instantly go viral.
#2. Criminal justice
Fake evidence could be used against people in court, causing them to be accused of crimes they did not commit. The wrong people could go to jail and, on the other hand, guilty people could be set free on the strength of false proof.
#3. Stock market
Deepfakes could be used to manipulate stock prices when altered footage of influential people making certain statements gets distributed. Imagine what would happen if a fake video surfaced of the CEO of a company such as Apple, Amazon, or Google declaring they had done something illegal. Back in 2008, Apple’s stock dropped 10 points after a false rumor emerged that Steve Jobs had suffered a major heart attack.
#4. Online bullying
The deepfake technology could also be used to amplify cyberbullying, especially now that it’s becoming widely available. People can easily become victims when manipulated media of them is posted online. Or they can be blackmailed by cybercriminals who threaten to leak the footage unless, for instance, they pay a certain amount of money.
#5. Business reputation
Someone could make false statements about your business to destabilize and degrade it. Malicious actors could make it look like you, or someone within your organization, is admitting to consumer fraud, bribery, sexual abuse, or any other wrongdoing you can think of. Obviously, these kinds of false statements can destroy your company’s reputation and make it difficult for you to prove otherwise.
What can be done?
Due to current gaps in the law, producers of deepfakes cannot yet be prosecuted. However, the Deepfakes Accountability Act (in full, the “Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act” – yes, you’ve correctly spotted an acronym right there) aims to criminalize this type of fake media.
In short, anyone who creates deepfakes would be required to reveal that the footage is altered. And if they fail to do so, it will be considered a crime. The existence of these kinds of regulations is mandatory to protect deepfake victims and also the general public from distorted information.
How can you protect your business from Deepfakes?
Your competitors could even resort to deepfake blackmail to try to push you out of the industry. No matter how good technological deepfake detection becomes, it won’t prevent manipulated media from being shared and reaching large numbers of people. So the best defense is to teach your employees how to identify fake footage and to question everything inside the organization that seems suspicious.
#1. Train your employees
The topic of deepfakes can be covered in your cybersecurity training. For instance, if employees receive an unexpected call from the CEO asking them to transfer $1 million to a bank account, they should first question whether the person on the other end of the line is who they say they are. A good countermeasure would be to have a few security questions in place that must be asked to verify a caller’s identity.
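Such a security-question countermeasure can be sketched as a simple challenge-response check. The questions, answers, and function names below are purely hypothetical examples, not a prescribed implementation; storing only hashes of the agreed answers means the verification list itself is not worth stealing:

```python
import hashlib

def _h(answer: str) -> str:
    """Normalize an answer (trim, lowercase) and hash it."""
    return hashlib.sha256(answer.strip().lower().encode()).hexdigest()

# Pre-agreed questions mapped to hashed answers (hypothetical examples).
CHALLENGES = {
    "What was the project codename for the Q2 launch?": _h("bluebird"),
    "Which office did we meet in last month?": _h("Lisbon"),
}

def verify_caller(answers: dict) -> bool:
    """Caller passes only if every pre-agreed question is answered correctly."""
    return all(_h(answers.get(q, "")) == expected
               for q, expected in CHALLENGES.items())

# A caller who knows both answers passes; a partial answer set fails.
print(verify_caller({"What was the project codename for the Q2 launch?": "Bluebird",
                     "Which office did we meet in last month?": "lisbon"}))  # True
print(verify_caller({"What was the project codename for the Q2 launch?": "robin"}))  # False
```

In practice the questions would be rotated and agreed out-of-band, so a deepfaked voice alone is not enough to authorize a transfer.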
#2. Monitor your brand’s online presence
Your brand’s online presence is probably already being monitored. So make sure the people responsible keep an eye out for fake content involving your organization and, if anything suspicious is brought to light, do their best to take it down as soon as possible and mitigate the damage.
This brings us to the next point.
#3. Be transparent
If you become a victim of deepfakes, make sure your audience is aware of the targeted attack. Trying to ignore what happened, or assuming that people didn’t believe what they saw or heard, won’t make the issue disappear. Your PR efforts should therefore center on communicating that someone from your company has been impersonated and on highlighting the artificial nature of the distributed footage.
Never let misinformation erode your public’s confidence!
Wrapping it all up
The dangers of deepfakes are real and should not be underestimated. A single ill-intended rumor could destroy your business. So, both as an individual and as an organization, you should be prepared to stand against these threats.
#44CON: GPS Trackers Hacked to Make Premium Rate Calls
Speaking at 44CON, Pen Test Partners researchers Tony Gee and Vangelis Stykas demonstrated vulnerabilities in GPS trackers, which enabled them to call premium rate phone numbers, and possibly influence the outcome of television talent shows.
Gee said that there is demand for GPS trackers, which are used in watches for kids, in cars, and even on pets’ collars, but their research had found consistent API vulnerabilities: the problems were in “a lot of common APIs and used across platforms” in cheaply available IoT products.
Stykas called one product range “a monstrosity,” saying that the research into Thinkrace technology found that most API calls did not require authentication, and all users start with the default password “123456.” There were at least 370 vulnerable devices across 80 domains on 40 different servers, which Stykas said allowed anyone to be tracked, with a hacker able to change the account email address, take over the device, and force a firmware update.
Calling it a “classic horizontal escalation of privilege,” Stykas said that the vendor had not responded to vulnerability disclosures for three years “on multiple attempts.”
In further research, Gee said that many of the GPS devices, particularly tracker watches for kids, used a pay-as-you-go SIM card and allowed a premium rate phone line to be called. “If we own the number, we make the money,” he said, pointing out that setting up a number costs only hundreds of pounds, though the PSA regulates this practice strictly.
Looking at the option of hacking a GPS tracker to send text votes to a premium line, Gee said that a typical SMS vote costs 35p, so a £10 top-up buys 28 votes. With 25 million vulnerable devices, that adds up to roughly 700 million votes. While he admitted that voting at the annual Eurovision song contest could not be influenced because of its jury system, it would be possible to influence talent shows like X Factor and Britain’s Got Talent. This would also allow the attacker to gamble on the winner.
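The vote economics are worth checking in a few lines. Using the figures from the talk (35p per SMS vote, a £10 top-up, 25 million devices), 28 votes per device across the fleet works out to 700 million votes:

```python
# Back-of-the-envelope economics of hijacking pay-as-you-go SIMs for SMS voting.
VOTE_PRICE_GBP = 0.35    # typical cost of one SMS vote (35p)
TOP_UP_GBP = 10.00       # credit on a pay-as-you-go SIM
DEVICES = 25_000_000     # estimated number of vulnerable trackers

votes_per_device = int(TOP_UP_GBP // VOTE_PRICE_GBP)  # floor division: whole votes only
total_votes = votes_per_device * DEVICES

print(votes_per_device)  # 28
print(total_votes)       # 700000000
```

At 35p each, those votes also represent roughly £245 million of premium-rate revenue, which is why owning the number is the attacker’s goal.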
On disclosure, Gee said that the UK’s four main providers (O2, Vodafone, EE and Three) allow premium lines to be called by default. The vendors have been notified, but “most products are not fixed and multiple devices have the same flaws.” The PSA, however, has responded and said that Pen Test Partners will be invited to review changes.
Gee concluded by saying that most trackers will not be fixed, but manufacturers “need to get better” as “authentication is not authorization.”
The Independent Commission on Examination Malpractice in the UK has recommended that all watches be banned from exam rooms, basically because it's becoming very difficult to tell regular watches from smart watches.
Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, learn how fileless malware abuses PowerShell. Also, read how Trend Micro researchers are pulling back the curtain on the cybercriminal underground to warn consumers and businesses about potential threats against IoT devices.
Trend Micro researchers from around the globe monitored five different cybercriminal undergrounds and, given the amount of chatter, found that there is no doubt that IoT devices, mainly routers, are certainly a target.
Researchers share a proof of concept showing how a use-after-free vulnerability in Internet Explorer can be fully and consistently exploited in Windows 10 RS5. The flaw was discovered through BinDiff and addressed in Microsoft’s September Patch Tuesday.
The newest iteration of Purple Fox that researchers came across, delivered by the Rig exploit kit, retains its rootkit component by abusing publicly available code, and now eschews NSIS in favor of abusing PowerShell, making Purple Fox capable of fileless infection. This blog discusses the malware’s features and security recommendations for avoiding these types of threats.
Trend Micro ensures its family of products is progressively enhanced to meet the needs of consumers and the Trend Micro Security 2020 Fall Release is no exception. Endpoint and network security products are improved to provide the most advanced protections from persistent, new, and emerging threats.
As cities become smarter, officials and security experts say that current defenses are unlikely to keep hackers at bay. Ideas for making cyber defenses smarter include reducing reliance on passwords and open-sourcing security standards to benefit from the perspective of a wider range of security professionals.
Continuing the trend from last month, several critical patches were for Remote Desktop Clients – all Remote Code Execution (RCE) vulnerabilities. Microsoft also patched two zero-days which are both elevation of privilege vulnerabilities.
Social engineering is by far the biggest factor in malicious hacking campaigns and nearly all successful email-based cyberattacks require the target to open files, click on links, or carry out some other action. While many of these attacks are designed to look highly legitimate, there are ways to identify what could potentially be a malicious attack.
CEOs of 51 companies from the Business Roundtable, including Amazon, IBM and Salesforce, signed a letter to U.S. congressional leaders urging them to create a comprehensive consumer data privacy law.
Wikipedia confirmed that it was hit by a malicious DDoS attack that took it offline across many countries. Following the attack, the Wikipedia Foundation received a $2.5M donation from Craigslist founder, Craig Newmark, to further expand security programs.
The medical provider noted that the malware restricted employees’ access to their systems and data, and it has officially revealed the approximate number of affected patients in a disclosure to the federal government.
Cyber criminals are increasingly turning their attention to hacking Internet of Things devices as connected products proliferate. While routers remain the top target for IoT-based cyberattacks, there’s a lot of discussion in underground forums about compromising internet-connected gas pumps.
Enhanced Trend Micro Security protects inboxes from scams and phishing attacks
Trend Micro announced the latest version of its flagship consumer offering, Trend Micro Security, which features enhanced protection from web threats and a new AI-powered Fraud Buster tool to protect Gmail and Outlook inboxes across the globe.
Cybercriminals who held to ransom the files of 22 Texas local government units for a combined ransom amount of US$2.5 million did not get a single cent thanks to a coordinated state and federal cyber response plan.
Are you well-versed on Trend’s suggestions for protecting your routers and other devices from malware? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.
The term “hacking” has become the talk of the town, with a new hacking incident being reported every single day. The internet is in for a spin as cases of hacking are reported on a global level, triggering the realization that anything and everything with a vulnerable spot…
AI and machine learning offer tremendous promise for humanity in terms of helping us make sense of Big Data. But, while the processing power of these tools is integral for understanding trends and predicting threats, it’s not sufficient on its own.
Thoughtful design of threat intelligence—design that accounts for the ultimate needs of its consumers—is essential too. There are three areas where thoughtful design of AI for cybersecurity increases overall utility for its end users.
Designing where your data comes from
To set the process of machine learning in motion, data scientists rely on robust data sets they can use to train models that deduce patterns. If your data is siloed, drawing on a single community of endpoints or made up only of data gathered from sensors like honeypots and crawlers, there are bound to be gaps in the resultant threat intelligence.
A diverse set of real-world endpoints is essential to achieve actionable threat intelligence. For one thing, machine learning models can be prone to picking up biases if exposed to either too much of a particular threat or too narrow of a user base. That may make the model adept at discovering one type of threat, but not so great at noticing others. Well-rounded, globally-sourced data provides the most accurate picture of threat trends.
Another significant reason real-world endpoints are essential is that some malware excels at evading traditional crawling mechanisms. This is especially common for phishing sites targeting specific geos or user environments, as well as for malware executables. Phishing sites can hide their malicious content from crawlers, and malware can appear benign or sit on a user’s endpoint for extended periods of time without taking an action.
Designing how to illustrate data’s context
Historical trends help to gauge future measurements, so designing threat intelligence that accounts for context is essential. Take a major website like www.google.com for example. Historical threat intelligence signals it’s been benign for years, leading to the conclusion that its owners have put solid security practices in place and are committed to not letting it become a vector for bad actors. On the other hand, if we look at a domain that was only very recently registered or has a long history of presenting a threat, there’s a greater chance it will behave negatively in the future.
Illustrating this type of information in a useful way can take the form of a reputation score. Since predictions about a data object’s future actions—whether it be a URL, file, or mobile app—are based on probability, reputation scores can help determine the probability that an object may become a future threat, helping organizations determine the level of risk they are comfortable with and set their policies accordingly.
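A reputation score of this kind can be sketched as a weighted combination of history signals. The function below is an illustrative assumption, not Webroot’s actual model: the weights, caps, and input signals (domain age, past detections, clean observations) are all made up for the sketch:

```python
from datetime import date

def reputation_score(first_seen: date, past_detections: int,
                     clean_observations: int, today: date) -> int:
    """Return a 0-100 reputation score for a URL or domain.

    Illustrative model: long-established domains with a clean history score
    high; newly registered domains or ones with past detections score low.
    """
    age_days = (today - first_seen).days
    score = 50                                   # neutral starting point
    score += min(age_days // 180, 20)            # up to +20 for age (capped)
    score += min(clean_observations // 100, 20)  # up to +20 for clean history
    score -= min(past_detections * 15, 60)       # heavy penalty per detection
    return max(0, min(100, score))

# A long-established, always-clean domain vs. a week-old unknown one.
print(reputation_score(date(2005, 1, 1), 0, 10_000, date(2019, 9, 1)))  # 90
print(reputation_score(date(2019, 8, 25), 0, 3, date(2019, 9, 1)))      # 50
```

An organization could then set policy thresholds against such a score, for example blocking anything below 30 and warning on anything below 60, matching its appetite for risk.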
Designing how you classify and apply the data
Finally, how a threat intelligence provider classifies data and the options they offer partners and users in terms of how to apply it can greatly increase its utility. Protecting networks, homes, and devices from internet threats is one thing, and certainly desirable for any threat intelligence feed, but that’s far from all it can do.
Technology vendors designing a parental control product, for instance, need threat intelligence capable of classifying content based on its appropriateness for children. And any parent knows malware isn’t the only thing children should be shielded from. Categories like adult content, gambling sites, or hubs for pirating legitimate media may also be worth blocking. This flexibility extends to the workplace, too, where peer-to-peer streaming and social media sites can affect worker productivity and slow network speeds, not to mention introduce regulatory compliance concerns. Being able to classify internet objects with such scalpel-like precision makes thoughtfully designed threat intelligence much more useful for the partners leveraging it.
Finally, the speed at which new threat intelligence findings are applied to all endpoints is critical. It’s well known that static threat lists can’t keep up with the pace of today’s malware, but updating those lists daily isn’t cutting it anymore either. The time from initial detection to global protection must be a matter of minutes.
This brings us back to where we started: the need for a robust, geographically diverse data set from which to draw our threat intelligence. For more information on how the Webroot Platform draws its data to protect customers and vendor partners around the globe, visit our threat intelligence page.
Ransomware Closes Arizona School District
As many students began returning for the fall semester, classes were cancelled in the Flagstaff Unified School District in Arizona after a ransomware attack disabled some of the district’s computer systems. Officials haven’t yet released any additional information on the ransom demanded or whether any sensitive employee or student documents were compromised. The attack is another in a chain of ransomware campaigns that have hit dozens of school districts around the country in recent months.
BEC Scam Targets Toyota Corporation
A subsidiary company of Toyota fell victim to a business email compromise (BEC) that could cost more than $37 million. Using social engineering to convince the victim to send the wire transfer has become a common practice around the world and earned scammers an estimated $1.3 billion in 2018 alone. Officials are still working to determine the proper course of action to recover the stolen funds, though it is unlikely they will be able to track down their present location.
International BEC Sting Nets 281 Arrests
With the cooperation of many law enforcement agencies around the world, at least 281 individuals were taken into custody for their roles in various BEC scams. Along with the arrests, officials seized $3.7 million in cash that had been stolen by redirecting wire transfers while posing as a high-level executive. While the majority of arrests came from Europe and Africa, nearly a quarter occurred in the U.S.
LokiBot Campaign Affects U.S. Manufacturer
A poorly written phishing email campaign was recently discovered carrying a rather malicious payload called LokiBot. In the scam, once a victim opens the attachment (with assurances in the email that it simply needs to be reviewed), an archive unzips and the payload begins hunting for credentials and any other sensitive information stored on the system. After reviewing the LokiBot sample, researchers tied the IP address from which the campaign originated to several other, similar campaigns from recent months.
Oklahoma State Trooper Pension Fund Stolen
Malicious hackers recently stole more than $4.2 million from the Oklahoma State Troopers’ pension fund, which is used to assist roughly 1,500 retired law enforcement agents in the state. While most of the benefits programs should remain unaffected, officials are confident that they will be able to recover the funds, which would otherwise be covered by an insurance company.
The post Cyber News Rundown: Arizona School Ransomware Attack appeared first on Webroot Blog.
#44CON: Establishing a Mental Health Toolbox
Noting the warning lights to assess your levels of stress and mental health now, and in the future, can save a lot of anguish in your working life.
Speaking at 44CON in London on the issue of dealing with mental health, Duo Security CISO advisory group member J Wolfgang Goerlich recommended a strategy of a “career owners manual” and knowing what to do to “make sure you have got a career and what you’re doing well.”
He recommended maintaining the right state of health to be able to thrive in what he called a “good community,” where we need to be supportive of others, as “a lot of us struggle.”
Goerlich advised taking a back seat at times, stepping back from work for a few months, and not being afraid of work being duplicated.
When looking at yourself in a current position, he recommended taking the following steps:
- Look at how your culture fits the company culture. Are we happy with the people in our organization “and do they make us feel good?”
- Are our values reflected in theirs, and do we feel good about ourselves when we look in the mirror or do we feel like we are compromising ourselves?
- Are the tasks we are doing good?
- Is diversity good where we work? Diversity brings different perspectives and points of view.
“You need to be sure the inputs line up, as different companies have different values,” he said. If we are unhappy, it is too easy to ignore the warning lights around our mental health, just as a teenager ignores the warning lights on a car. These warning lights include:
- Physiological effects
- Non-competitive compensation
- Lack of training
- Lack of career path
- Poor teamwork
- Poor leadership
- No appreciation or recognition
- Misaligned values and culture
In terms of tools, Goerlich recommended relaxing, recharging and re-learning, and doing “what is good for you.” This included time off work, what Goerlich called “zero days,” to recharge. The steps to take to recharge are as follows:
Weekly: prepare for the week ahead, do the “basic things,” de-stress and energize, and review the previous week.
Monthly: review stress, check warning lights, and schedule “zero days.”
Quarterly: check your health, review accomplishments, review learning, plan for next quarter, and schedule time off.
Annually: conduct your annual job review.
Every decade: assess who you are now, what you enjoy now, and where the job market is going.
“Make sure you have got the tools in your toolbox and are doing maintenance on your career,” he concluded. “This [cybersecurity] is a fantastic career and industry, but we see too many people struggle.”
Marketer Exposes 198 Million Car Buyer Records
Another unprotected Elasticsearch database has been discovered by researchers, this time exposing personally identifiable information (PII) linked to 198 million car buying records.
The privacy snafu was discovered back in August by Jeremiah Fowler, researcher at SecurityDiscovery.
The non-password protected database contained a massive 413GB of data on potential car buyers, including names, email addresses, phone numbers, home addresses and more stored in plain text.
Also left publicly accessible were IP addresses, ports, pathways, and storage info “that cyber-criminals could exploit to access deeper into the network,” he explained.
Fowler spent several days trying to locate the owner of the database, which contained information from multiple websites.
“Only by manually reviewing multiple domains did I discover that they all linked back to dealerleads.com,” he added. “I was able to speak with the general sales manager who was concerned and professional with getting the information secured and public access was closed shortly after my notification by phone.”
As the name suggests, Dealer Leads provides online marketing support in the form of prospective car buyers for dealerships around the US. It's unknown how long the data was exposed for.
“It is unclear if Dealer Leads has notified individuals, dealerships, or authorities about the data incident. Because of the size and scope of the network, applicants and potential customers may not know if their data was exposed,” Fowler warned.
“Also, when contacting a local dealership in their hometown about a specific automobile they may not have known that the website actually collected their data as a lead or that this data could potentially be stored, saved, sold, or shared via DealerLeads.”
The incident is just the latest in a long line of privacy leaks via Elasticsearch, AWS S3, and other online platforms, due to security misconfigurations.
In recent months, Honda exposed 134 million company documents, a leading Chinese university leaked 8TB of email metadata, and Dow Jones left a sensitive global watchlist of criminals and terrorists open to the public, all via misconfigured Elasticsearch instances.
Iranian Threat Group Targets 380 Global Universities
An Iranian threat group exposed last year has been detected targeting hundreds of universities in over 30 countries in a global phishing operation.
Cobalt Dickens has been linked to indictments last year against nine Iranian nationals who worked for the Mabna Institute. They allegedly stole more than 31TB of data from over 140 US universities, 30 US companies and five government agencies, alongside more than 176 universities in 21 other countries.
The Secureworks Counter Threat Unit this week claimed the group’s activity has not declined despite the publicity surrounding the indictments; in fact, it discovered a new campaign similar to the group’s August 2018 phishing raids, using free online services and publicly available tools.
Specifically, the group uses compromised university resources to send spoofed library-themed emails containing links to log-in pages designed to harvest user credentials.
Some 20 new domains were registered in Australia, the United States, the United Kingdom, Canada, Hong Kong, and Switzerland using the Freenom domain provider. Many use valid SSL certificates issued by Let’s Encrypt to add further authenticity to the phishing campaigns.
Continuing the theme of using publicly available resources, the group utilized the SingleFile plugin available on GitHub and the free HTTrack Website Copier standalone application to copy the login pages of targeted university resources, according to Secureworks.
The researchers claimed that metadata in the spoofed web pages indicates the attackers are of Iranian origin. At least 380 universities worldwide have apparently been targeted in this latest campaign.
“Some educational institutions have implemented multi-factor authentication (MFA) to specifically address this threat,” it concluded.
“While implementing additional security controls like MFA could seem burdensome in environments that value user flexibility and innovation, single-password accounts are insecure. CTU researchers recommend that all organizations protect Internet-facing resources with MFA to mitigate credential-focused threats.”
Universities are an increasingly popular target for nation state attackers looking for highly sensitive research to advance homegrown development programs.
Sophos plans to open source Sandboxie, a relatively popular Windows utility that allows users to run applications in a sandbox. Until that happens, the company has made the utility free.
About Sandboxie
Sandboxie creates a virtual container in which untrusted programs can be run or installed so that they can’t maliciously modify the underlying OS or data on the host machine. It makes it safer to use apps such as browsers, email programs, IM clients and Office suites.
The post Sandboxie becomes freeware, soon-to-be open source appeared first on Help Net Security.
Mirai and SMB Attacks Dominate 1H 2019
Attacks on IoT devices using Mirai and its variants and raids against the Windows SMB protocol dominated the first half of 2019, according to new data from F-Secure.
The Finnish security vendor analyzed its global network of honeypots to find the number of “attack events” in the first six months of 2019 was 12 times higher than the same period in 2018.
The largest share, 760 million events, came via the Telnet protocol, followed by 611 million events on UPnP, both of which are used by connected devices.
The malware found in F-Secure’s honeypots was predominantly versions of Mirai, the infamous strain that searches for exposed IoT endpoints and cracks open those protected only by default credentials.
SMB port 445 also featured strongly, with 556 million events. This indicates continued interest on the part of cyber-criminals in exploiting the protocol targeted by the WannaCry hackers. According to F-Secure, it remains popular due to the high number of unpatched servers around the world.
In fact, Kaspersky data from last November revealed that WannaCry hit almost 75,000 users in Q3 2018.
“Three years after Mirai first appeared, and two years after WannaCry, it shows that we still haven’t solved the problems leveraged in those outbreaks,” said F-Secure principal researcher Jarno Niemela.
“The insecurity of the IoT, for one, is only getting more profound, with more and more devices cropping up all the time and then being co-opted into botnets. And the activity on SMB indicates there are still too many machines out there that remain unpatched.”
The report also revealed a decline in crypto-jacking, suggesting that this had been influenced by lower prices for digital currency and the shutting down of CoinHive earlier this year.
However, ransomware is once again a major threat. Interestingly, the most popular attack vector is RDP (31%), revealing that easily brute-forced passwords are a key security risk. Second most popular was email spam (23%), followed by compromised firmware/middleware.
Poland announced it will launch a cyberspace defense force by 2024 composed of around 2,000 soldiers with a deep knowledge in
The news was reported by AFP; Defence Minister Mariusz Blaszczak announced that the cyber command unit would start operations in 2022.
“We’re well aware that in today’s
The Ministry also added that Poland would have enough IT graduates
“Poland’s defense ministry is already looking for talent by partnering with the HackYeah
The post Poland to establish Cyberspace Defence Force by 2024 appeared first on Security Affairs.
Security analytics is the process of collecting and aggregating data, then analyzing it with tools in order to monitor and identify threats. Depending on the tools used, the process can incorporate diverse data sets when detecting patterns and running algorithms. Security analytics can collect data from several points, such as:
- Cloud sources.
- Endpoint devices.
- Network traffic.
- Non-IT contextual data.
- Business applications and software.
- External threat intelligence.
- Access management data.
Recent developments have also made adaptive learning techniques available, which fine-tune detection models based on experience and past learnings and apply anomaly detection for security analytics. They can accumulate and analyze data in real time from:
- Geographical location.
- Asset metadata.
- IP context.
- Threat intelligence.
The data collected by the tools can then be used for immediate detection of threats or for future analysis to identify patterns and create better protocols or defenses.
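The collect-then-detect loop described above can be sketched with a toy aggregator that merges event streams from several collection points and applies a simple immediate-detection rule. The source names, event shapes, and threshold below are illustrative assumptions, not any particular product’s schema:

```python
from collections import Counter

# Toy event streams from different collection points: (source, user, action).
endpoint_events = [("endpoint", "alice", "login_fail"), ("endpoint", "bob", "login_ok")]
network_events  = [("network", "alice", "login_fail"), ("network", "alice", "login_fail")]
cloud_events    = [("cloud",   "alice", "login_fail"), ("cloud",   "carol", "login_ok")]

def detect_bruteforce(*streams, threshold=3):
    """Aggregate events across all sources and flag users whose combined
    failed-login count meets the threshold (immediate detection)."""
    failures = Counter(user for stream in streams
                       for _, user, action in stream
                       if action == "login_fail")
    return [user for user, count in failures.items() if count >= threshold]

print(detect_bruteforce(endpoint_events, network_events, cloud_events))  # ['alice']
```

The key point the sketch illustrates is correlation: no single source sees enough failures to cross the threshold, but the aggregated view does.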
Security Analytics Benefits
Organizations get several key benefits when they use security analytics:
Security analytics can analyze data from several different sources and identify threats and security incidents based on the findings. It does this by analyzing logged data, along with other sources, to pinpoint the correlations between them.
One of the most important aspects of security analytics is compliance. Depending on the industry, organizations that manage sensitive data are required by law to comply with regulations for security. By maintaining proper analytics for threat detection, organizations can ensure their compliance with these regulations.
Analytics also play a vital role in forensic investigations of security threats and breaches. Because security analytics collates data from many sources, personnel can use it to determine what happened and repair any damage caused by a breach. This also helps in creating proactive policies to prevent a similar attack or breach.
Use Cases of Security Analytics
There are several use cases for security analytics, including detecting threats, improving data visibility, monitoring network traffic, and analyzing user behavior. Here are more:
- Detect suspicious patterns from user behavior analysis.
- Monitor employee activity.
- Detect data exfiltration by hackers.
- Analyze network traffic to identify potential threats.
- Detect insider threats.
- Identify improper account use.
- Hunt for threats.
- Find compromised accounts.
- Demonstrate compliance whenever there is an audit.
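The first use case above, spotting suspicious patterns in user behavior, can be as simple as flagging logins for the same account from different countries within a short window ("impossible travel"). The records, geo labels, and one-hour window below are made-up illustrations:

```python
from datetime import datetime, timedelta

# (user, country, timestamp) login records, e.g. enriched with IP geolocation.
logins = [
    ("dave", "US", datetime(2019, 9, 13, 9, 0)),
    ("dave", "RU", datetime(2019, 9, 13, 9, 40)),   # 40 min later, other country
    ("erin", "UK", datetime(2019, 9, 13, 8, 0)),
    ("erin", "UK", datetime(2019, 9, 13, 17, 0)),
]

def impossible_travel(records, window=timedelta(hours=1)):
    """Flag users who log in from two different countries inside the window."""
    flagged = set()
    records = sorted(records, key=lambda r: (r[0], r[2]))  # group by user, then time
    for (u1, c1, t1), (u2, c2, t2) in zip(records, records[1:]):
        if u1 == u2 and c1 != c2 and t2 - t1 <= window:
            flagged.add(u1)
    return sorted(flagged)

print(impossible_travel(logins))  # ['dave']
```

A real system would account for VPNs and travel speed rather than a fixed window, but the pattern-over-raw-logs idea is the same.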
Above all, the main goal of any security analytics is to turn raw data into actionable insights that pinpoint potential threats and enable an immediate response. This adds a critical layer of security over the vast amount of data generated by users, software, applications, networks, and more.