OSSEC is a host-based intrusion detection and log analysis system with correlation and active response features. It is cross-platform, so I can run it on both my Windows and Linux systems. The moving force behind the conference was a company local to me called Atomicorp.
In brief, I really enjoyed this one-day event. (I had planned to attend the workshop on the second day but my schedule did not cooperate.) The talks were almost uniformly excellent and informative. I even had a chance to talk jiu-jitsu with OSSEC creator Daniel Cid, who despite hurting his leg managed to travel across the country to deliver the keynote.
I'd like to share a few highlights from my notes.
First, I had been worried that OSSEC was in some ways dead. I saw that the Security Onion project had replaced OSSEC with a fork called Wazuh, which I learned is apparently pronounced "wazoo." To my delight, I learned OSSEC is decidedly not dead, and that Wazuh has been suffering stability problems. OSSEC has a lot of interesting development ahead of it, which you can track on their Github repo.
For example, the development roadmap includes eliminating Logstash from the pipeline used by many OSSEC users. OSSEC would feed directly into Elasticsearch. One speaker noted that Logstash has a 1.7 GB memory footprint, which astounded me.
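As a sketch of what such a direct pipeline could look like, the snippet below renders OSSEC alert records as Elasticsearch bulk-API lines (one action line plus one document line per alert). The alert fields, index name, and file path are illustrative assumptions, not OSSEC's exact schema:

```python
import json

def to_bulk_lines(alerts, index="ossec-alerts"):
    """Render alert dicts as Elasticsearch bulk-API (NDJSON) lines:
    an action line followed by the document itself, per alert."""
    lines = []
    for alert in alerts:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(alert))
    return "\n".join(lines) + "\n"

# A real shipper would tail something like /var/ossec/logs/alerts/alerts.json
# and POST the payload to the cluster's /_bulk endpoint with
# Content-Type: application/x-ndjson.
sample = [{"rule": {"level": 7, "description": "sshd: brute force"},
           "srcip": "203.0.113.5"}]
payload = to_bulk_lines(sample)
```

Cutting Logstash out in this way trades its parsing and buffering features for a much smaller footprint, which seems to be exactly the trade the roadmap is after.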
On a related note, the OSSEC team is planning a new Web console, with a design goal of running in an AWS t2.micro instance. The team said that instance offers 2 GB of memory, which doesn't match AWS's published specifications: the t2.micro has 1 GB, and the t2.small has 2 GB. I think they meant the t2.micro with 1 GB of RAM, as that is the free-tier instance. Either way, I'm excited to see this later in 2019.
Second, I thought the presentation by security personnel from USA Today offered an interesting insight. One design goal they had for monitoring their Google Cloud Platform (GCP) was to not install OSSEC on every container or on Kubernetes worker nodes. Several times during the conference, speakers noted that the transient nature of cloud infrastructure is directly antithetical to standard OSSEC usage, whereby OSSEC is installed on servers with long uptime and years of service. Instead, USA Today used OSSEC to monitor HTTP logs from the GCP load balancer, logs from Google Kubernetes Engine, and monitored processes by watching output from successive kubectl invocations.
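That kubectl-watching approach can be sketched as a diff of successive snapshots. Everything below is illustrative, not USA Today's actual tooling: the JSON merely mirrors the shape of `kubectl get pods -o json` output, and the subprocess invocation appears only in a comment:

```python
import json

def pod_names(kubectl_json):
    """Extract pod names from `kubectl get pods -o json`-style output."""
    return {item["metadata"]["name"] for item in json.loads(kubectl_json)["items"]}

def diff_snapshots(previous, current):
    """Report pods that appeared or vanished between two polls."""
    return {"started": sorted(current - previous),
            "stopped": sorted(previous - current)}

# A real monitor would run, on an interval, something like:
#   out = subprocess.run(["kubectl", "get", "pods", "-o", "json"],
#                        capture_output=True, text=True).stdout
# and feed successive outputs through pod_names().
snap1 = '{"items": [{"metadata": {"name": "web-1"}}, {"metadata": {"name": "web-2"}}]}'
snap2 = '{"items": [{"metadata": {"name": "web-2"}}, {"metadata": {"name": "web-3"}}]}'
changes = diff_snapshots(pod_names(snap1), pod_names(snap2))
```

Polling from outside the cluster like this sidesteps the agent-on-every-node problem for transient workloads, at the cost of only seeing what the API server reports.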
Third, a speaker from Red Hat brought my attention to an aspect of containers that I had not considered. Docker and containers have made software testing and deployment a lot easier for everyone. However, those who provide containers have effectively become Linux distribution maintainers. In other words, who is responsible when a security or configuration vulnerability in a Linux component is discovered? Will the container maintainers be responsive?
Another speaker emphasized the difference between "security of the cloud," offered by cloud providers, and "security in the cloud," which is supposed to be the customer's responsibility. This makes sense from a technical point of view, but I expect that in the long term this differentiation will no longer be tenable from a business or legal point of view.
Customers are not going to have the skills or interest to secure their software in the cloud, as they outsource ever more technical talent to the cloud providers and their infrastructure. I expect cloud providers to continue to develop, acquire, and offer more security services, and accelerate their competition on a "complete security environment."
I look forward to more OSSEC development and future conferences.
The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.
As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read the book at left, Windows NT Security Handbook, published in 1996 by Tom Sheldon. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.
Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?
There are many reasons one could cite, but I think the following are at least worthy of mention.
- The systems enforcing the security model are exposed to intruders.
- Intruders are generally able to gain code execution on systems participating in the security model.
- Intruders have access to the network traffic, which partially contains elements of the security model.
From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.
The question then becomes:
Does this change with the cloud?
In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.
Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.
This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.
The weakness will likely be their personnel.
Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.
Is there anything users can do as they hand their compute and data assets to cloud operators?
I suggest four moves.
First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.
Second, lawmakers may also need improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.
Third, government regulators will have to ensure no cloud provider assumes a monopoly, and no two providers assume a duopoly. We may end up with three major players and a smattering of smaller ones, as is the case with many mature industries.
Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.
Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.
Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.
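A toy version of such a configuration-assessment check might look like the following. The rule format here is a hypothetical normalized export invented for illustration, not any provider's real API shape:

```python
def overly_open_rules(firewall_rules):
    """Flag ingress rules that expose sensitive ports to the whole internet.
    Each rule is a dict with hypothetical keys: name, source CIDR, port."""
    SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL
    findings = []
    for rule in firewall_rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append(f"{rule['name']}: port {rule['port']} open to the internet")
    return findings

rules = [
    {"name": "allow-ssh-anywhere", "source": "0.0.0.0/0", "port": 22},
    {"name": "allow-https",        "source": "0.0.0.0/0", "port": 443},
    {"name": "allow-db-internal",  "source": "10.0.0.0/8", "port": 5432},
]
findings = overly_open_rules(rules)
```

Running a check like this continuously, against configuration pulled from the provider's API rather than a hand-maintained file, is the essence of the "fifth move" above.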
A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.
Cloud Security should not be an afterthought
It is essential for security to be baked into a new cloud service's design, requirements determination and procurement process. In particular, the areas of security responsibility must be defined and documented with the intended cloud service provider.
Cloud does not absolve the business of their security responsibilities
All standard cloud service models, whether Infrastructure as a Service (IaaS), Platform as a Service (PaaS) or Software as a Service (SaaS), involve three areas of security responsibility to define and document:
- Cloud Service Provider Owned
- Business Owned
- Shared (Cloud Service Provider & Business)
Regardless of the cloud model, data is always the responsibility of the business.
For further guidance, see:
- Cloud Security Alliance
- PCI SSC Cloud Computing Guidelines
- NCSC Cloud Security Guidance
- Microsoft O365 Security and Compliance Blueprint UK
What can we learn from the major data breaches of 2018?
2018 was a major year for cybersecurity. With the introduction of GDPR, the public’s awareness of their cyber identities has vastly increased – and the threat of vulnerability along with it. The Information Commissioner’s Office received an increased number of complaints this year, and the news was filled with reports of multi-national, multi-million-pound businesses suffering dramatic breaches at the hands of cybercriminals.
2018 Data Breaches
Notable breaches last year include:
5. British Airways
The card details of 380,000 customers were left vulnerable after a hack affected bookings on BA’s website and app. The company insists that no customer’s card details have been used illegally, but it is expected to suffer major losses in revenue and fines as a result of the attack.
Almost 2 million users had their personal data, including billing information and email addresses, accessed through an API by an international group of hackers last August.
3. Timehop
A vulnerability in the app’s cloud computing account meant that the names and contact details of 21 million Timehop users were affected. The company assured users that memories are only shared on the day and deleted afterwards, meaning that the hackers were not able to access users’ Facebook and Twitter history.
2. Facebook & Cambridge Analytica
One of the most sensationalised news stories of the last year, Facebook suffered a string of scandals after it was released that analytics firm Cambridge Analytica had used the Facebook profile data of 87 million users in an attempt to influence President Trump’s campaign and potentially aid the Vote Leave campaign in the UK-EU referendum.
1. Quora
After a “malicious third party” accessed Quora’s systems, the account information, including passwords, names and email addresses, of 100 million users was compromised. The breach was discovered in November 2018.
As the UK made the switch from the Data Protection Act to GDPR, businesses and internet users across the country suddenly became more aware of their internet identities and their rights pertaining to how businesses handled their information.
With the responsibility now firmly on the business to protect the data of UK citizens, companies are expected to keep a much higher standard of security in order to protect all personal data of their clients.
How many complaints to the ICO?
Elizabeth Denham, the UK’s Information Commissioner, said that the year 2017-18 was ‘one of increasing activity and challenging actions, some unexpected, for the office’.
This is shown in a 15% increase in data protection complaints, as well as a 30% increase in self-reported breaches. Since this was the first year of GDPR, it is expected that self-reported breaches increased as businesses sought to avoid the much higher fines now attached to delayed disclosure.
The ICO also reported 19 criminal prosecutions, 18 convictions and fines totalling £1.29 million last year for serious security failures under the Data Protection Act 1998. The office has given assurances that it does not intend to make an example of firms reporting data breaches in the early period of GDPR, but as time goes on, leniency is likely to fade as businesses settle into the higher standards.
What does it mean for SMEs?
With 36% of SMEs having no cybersecurity plan, the general consensus is that they make for unpopular targets. However, with the GDPR, the responsibility is on the business to protect their data so being vulnerable could result in business-destroying costs. Considering the cost to businesses could total the higher of 2% of annual turnover or €10 million, data protection is of paramount importance to small businesses.
How exposed are we in the UK?
At 31%, the UK’s vulnerability rating is higher than those of the Netherlands, Germany and Estonia (30%) and Finland (29%). The UK is also a more likely target for cybercriminals looking to exploit the high tech and financial services industries, which are among the most vulnerable across Great Britain.
Despite a higher level of vulnerability, the UK has one of the largest cyber security talent pools, showing there is time and manpower being dedicated to the protection of our data online.
MITRE ATT&CK Framework: Keep your friends close, but your enemies even closer
So, how can you get started and use the framework? Nearly every organisation is interested in using MITRE ATT&CK, but they have different views on how it should be adopted based on the capabilities of their security operations. We need to make sure that the MITRE ATT&CK framework doesn’t become another source of threat data that is not fully utilised, a passing fad, or a tool that only the most sophisticated security operations teams can apply effectively. To avoid this fate, we must look at ways to map the framework to stages of maturity so that every organisation can derive value. Here are a few examples of how to use the framework with appropriate use cases as maturity levels evolve.
Stage 1: Reference and Data Enrichment
The MITRE ATT&CK framework contains a tremendous amount of data that could potentially be valuable to any organisation. The MITRE ATT&CK Navigator provides a matrix view of all the techniques so that security analysts can see what techniques an adversary might apply to infiltrate their organisation. To more easily consume this data, a good place to start is with tools that make that data easy to access and share across teams. This may be through an enrichment tool or a platform with a centralised threat library that allows a user to aggregate the data and easily search for adversary profiles to get answers to questions like: Who is this adversary? What techniques and tactics are they using? What mitigations can I apply? Security analysts can use the data from the framework as a detailed source of reference to manually enrich their analysis of events and alerts, inform their investigations and determine the best actions to take depending on relevance and sightings within their environment.
Stage 2: Indicator or Event-driven Response
Building on the ability to reference and understand MITRE ATT&CK data, in Stage 2 security teams incorporate capabilities in the platform within their operational workflows that allow them to apply a degree of action to the data more effectively. For example, with the data ingested in a centralised threat library, they can build relationships between that data automatically without having to form those relationships manually. By automatically correlating events and associated indicators from inside the environment (from sources including the security information and event management (SIEM) system, log management repository, case management systems and security infrastructure) with indicators from the MITRE ATT&CK framework, they gain the context to immediately understand the who, what, where, when, why and how of an attack. They can then automatically prioritise based on relevance to their organisation and determine high-risk indicators of compromise (IOCs) to investigate within their environment. With the ability to use ATT&CK data in a more simple and automated manner, security teams can investigate and respond to incidents and push threat intelligence to sensors for detection and hunt for threats more effectively.
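The correlation step described above can be reduced to a small sketch: match indicators observed in local events against indicators associated with ATT&CK techniques. The mapping and events below are invented for illustration; real technique data would come from the framework's STIX feed or a threat intelligence platform (the technique IDs themselves, T1110 Brute Force and T1566 Phishing, are real):

```python
def correlate(events, technique_iocs):
    """Tag events with the ATT&CK technique IDs whose known indicators
    (hashes, IPs, domains -- all illustrative here) they match."""
    hits = []
    for event in events:
        for technique, iocs in technique_iocs.items():
            if event["indicator"] in iocs:
                hits.append({"event": event["id"], "technique": technique})
    return hits

# Illustrative indicator-to-technique mapping.
mapping = {"T1110": {"203.0.113.9"}, "T1566": {"evil.example.com"}}
events = [{"id": "e1", "indicator": "203.0.113.9"},
          {"id": "e2", "indicator": "198.51.100.4"}]
hits = correlate(events, mapping)
```

In practice a platform does this continuously across SIEM and log-management feeds, so that the prioritisation described above starts from correlated, contextualised hits rather than raw indicators.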
Stage 3: Proactive Tactic or Technique-driven Threat Hunting
At this stage, threat hunting teams can pivot from searching for indicators to taking advantage of the full breadth of ATT&CK data. Instead of narrowly focusing on more targeted pieces of data that appear to be suspicious, threat hunting teams can use the platform to start from a higher vantage point with information on adversaries and associated TTPs. They can take a proactive approach, beginning with the organisation’s risk profile, mapping those risks to specific adversaries and their tactics, drilling down to techniques those adversaries are using and then investigating if related data have been identified in the environment. For example, they may be concerned with APT28 and can quickly answer questions including: What techniques do they apply? Have I seen potential IOCs or possible related system events in my organisation? Are my endpoint technologies detecting those techniques?
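The adversary-first pivot in Stage 3 can be sketched as a coverage check: start from the techniques attributed to an adversary of concern and subtract what your sensors already detect. The technique list below is an invented subset for illustration; the authoritative list for APT28 lives in ATT&CK's group entry (G0007):

```python
def coverage_gaps(adversary_techniques, detected_techniques):
    """Given techniques attributed to an adversary and the set the
    organisation's sensors can detect, return the uncovered ones."""
    return sorted(set(adversary_techniques) - set(detected_techniques))

# Illustrative subset of techniques associated with APT28.
apt28_techniques = ["T1566", "T1110", "T1071"]
our_detections = {"T1566", "T1071"}
gaps = coverage_gaps(apt28_techniques, our_detections)
```

Each gap in the output is a concrete hunting assignment: go look for evidence of that technique in the environment, and close the detection hole afterwards.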
The success of MITRE ATT&CK will depend on how easy it is to apply effectively. With an understanding of maturity levels and use cases, and the ability for technologies to support security operations teams at whatever stage they are in, organisations will be able to use the framework to their advantage. As their desire and capabilities to use the data evolve and grow, they’ll be able to dig deeper into the MITRE ATT&CK framework and gain even greater value.
This week, the famous RSA Conference 2019 is underway, where supposedly "The World Talks Security" -
If that's the case, let's talk - I'd like to respectfully ask the entire RSA Conference just 1 simple cyber security question -
Question: What lies at the very foundation of cyber security and privileged access of not just the RSAs, EMCs, Dells, CyberArks, Gartners, Googles, Amazons, Facebooks and Microsofts of the world, but also at the foundation of virtually all cyber security and cloud companies and at the foundation of over 85% of organizations worldwide?
For those who may not know the answer to this ONE simple cyber security question, the answer's in line 1 here.
For those who may know the answer, and I sincerely hope that most of the world's CIOs, CISOs, Domain Admins, Cyber Security Analysts, Penetration Testers and Ethical Hackers know the answer, here are 4 simple follow-up questions -
- Q 1. Should your organization's foundational Active Directory be compromised, what could be its impact?
- Q 2. Would you agree that the (unintentional, intentional or coerced) compromise of a single Active Directory privileged user could result in the compromise of your organization's entire foundational Active Directory?
- Q 3. If so, then do you know that there is only one correct way to accurately identify/audit privileged users in your organization's foundational Active Directory, and do you possess the capability to correctly be able to do so?
- Q 4. If you don't, then how could you possibly know exactly how many privileged users there are in your organization's foundational Active Directory deployment today, and if you don't know so, ...OMG... ?!
You see, if even the world's top cyber security and cloud computing companies themselves don't know the answers to such simple, fundamental Kindergarten-level cyber security questions, how can we expect 85% of the world's organizations to know the answer, AND MORE IMPORTANTLY, what's the point of all this fancy peripheral cyber security talk at such conferences when organizations don't even know how many (hundreds if not thousands of) people have the Keys to their Kingdom(s) ?!
Today Active Directory is at the very heart of Cyber Security and Privileged Access at over 85% of organizations worldwide, and if you can find me even ONE company at the prestigious RSA Conference 2019 that can help organizations accurately identify privileged users/access in 1000s of foundational Active Directory deployments worldwide, you'll have impressed me.
Those who truly understand Windows Security know that organizations can neither adequately secure their foundational Active Directory deployments nor accomplish any of these recent buzzword initiatives like Privileged Access Management, Privileged Account Discovery, Zero-Trust etc. without first being able to accurately identify privileged users in Active Directory.
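To illustrate just one reason the count is hard: Active Directory groups nest, so even the naive group-membership portion of the problem requires recursive expansion with cycle handling. The sketch below operates on an invented in-memory group map standing in for LDAP query results, and, as the post argues, this is only a first pass: a real audit must also analyse ACLs and delegated rights, which group enumeration alone cannot reveal:

```python
def expand_members(group, groups):
    """Recursively expand nested group membership.
    `groups` maps a group name to its direct members (users or groups);
    an in-memory stand-in for what directory queries would return."""
    users, seen = set(), set()

    def walk(name):
        if name in seen:          # guard against membership cycles
            return
        seen.add(name)
        for member in groups.get(name, ()):
            if member in groups:  # nested group: recurse
                walk(member)
            else:                 # leaf: a user account
                users.add(member)

    walk(group)
    return users

groups = {
    "Domain Admins": ["alice", "Tier0 Ops"],
    "Tier0 Ops":     ["bob", "Backup Ops"],
    "Backup Ops":    ["carol", "Tier0 Ops"],  # nested *and* cyclic
}
admins = expand_members("Domain Admins", groups)
```

Three effective members from one visible name is exactly the undercounting trap the questions above are pointing at.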
PS: Pardon the delay. I've been busy and haven't much time to blog since my last post on Cyber Security 101 for the C-Suite.
PS2: Microsoft, when were you planning to start educating the world about what's actually paramount to their cyber security?
What is an IT auditor?
An IT auditor is responsible for analyzing and assessing a company’s technological infrastructure to ensure processes and systems run accurately and efficiently, while remaining secure and meeting compliance regulations. An IT auditor also identifies any IT issues that fall under the audit, specifically those related to security and risk management. If issues are identified, IT auditors are responsible for communicating their findings to others in the organization and offering solutions to improve or change processes and systems to ensure security and compliance.