Monthly Archives: March 2019

SENTENCED TO TWO YEARS IN JAIL FOR DESTROYING THE DATA OF A FORMER EMPLOYER



IN BRIEF: Steffan Needham, who served as an IT consultant at the UK company Voova, has been sentenced to two years in jail for destroying the data of his former employer.
--------------------------------

According to the Thames Valley Police in the United Kingdom, the suspect was dismissed by his employer and subsequently destroyed all of the company's critical data, in what has been interpreted as revenge for his dismissal.
The destruction of the data is estimated to have cost the company six hundred and fifty thousand US dollars (US$650,000) and to have caused several employees to lose their jobs.

The suspect was sentenced under the United Kingdom's computer crime law, the Computer Misuse Act.




In addition, the company was found to have shortcomings in its data-protection practices, including the lack of multi-factor authentication when users log in to its systems, and the absence of a requirement that deleting data from a system involve more than one person.
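The dual-control idea mentioned above — requiring more than one person to approve a destructive action — can be sketched in a few lines. This is a minimal illustration, not any real product's implementation, and all names are made up:

```python
# Sketch of two-person approval for destructive operations.
# All class and user names here are illustrative.

class DeletionRequest:
    def __init__(self, resource, requested_by, required_approvals=2):
        self.resource = resource
        self.requested_by = requested_by
        self.required_approvals = required_approvals
        self.approvals = set()

    def approve(self, user):
        # The requester cannot approve their own request.
        if user != self.requested_by:
            self.approvals.add(user)

    def is_authorized(self):
        return len(self.approvals) >= self.required_approvals

req = DeletionRequest("customer-db", requested_by="alice")
req.approve("alice")          # ignored: self-approval
req.approve("bob")
print(req.is_authorized())    # False: only one valid approval so far
req.approve("carol")
print(req.is_authorized())    # True: two distinct approvers
```

Had a control like this been in place, a single disgruntled insider could not have deleted the data on their own.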



Companies have been advised to take serious precautions to protect their data, so as to guard against malicious or disgruntled insiders who could cause damage later on.


Meanwhile, a court in the United States has granted Microsoft permission to take down approximately 99 websites associated with phishing attacks.

Tom Burt of Microsoft explained that the operation that disrupted and took down those 99 websites also involved other major companies, such as Yahoo.

Commando VM: The First of Its Kind Windows Offensive Distribution


For penetration testers looking for a stable and supported Linux testing platform, the industry agrees that Kali is the go-to platform. However, if you’d prefer to use Windows as an operating system, you may have noticed that a worthy platform didn’t exist. As security researchers, every one of us has probably spent hours customizing a Windows working environment at least once and we all use the same tools, utilities, and techniques during customer engagements. Therefore, maintaining a custom environment while keeping all our tool sets up-to-date can be a monotonous chore for all. Recognizing that, we have created a Windows distribution focused on supporting penetration testers and red teamers.

Born from our popular FLARE VM that focuses on reverse engineering and malware analysis, the Complete Mandiant Offensive VM (“Commando VM”) comes with automated scripts to help each of you build your own penetration testing environment and ease the process of VM provisioning and deployment. This blog post aims to discuss the features of Commando VM, installation instructions, and an example use case of the platform. Head over to the Github to find Commando VM.

About Commando VM

Penetration testers commonly use their own variants of Windows machines when assessing Active Directory environments. Commando VM was designed specifically to be the go-to platform for performing these internal penetration tests. The benefits of using a Windows machine include native support for Windows and Active Directory, using your VM as a staging area for C2 frameworks, browsing shares more easily (and interactively), and using tools such as PowerView and BloodHound without having to worry about placing output files on client assets.

Commando VM uses Boxstarter, Chocolatey, and MyGet packages to install all of the software, and delivers more than 140 tools and utilities to support penetration testing.

With such versatility, Commando VM aims to be the de facto Windows machine for every penetration tester and red teamer. For the blue teamers reading this, don’t worry, we’ve got full blue team support as well! The versatile tool sets included in Commando VM provide blue teams with the tools necessary to audit their networks and improve their detection capabilities. Its library of offensive tools also makes it easy for blue teams to keep up with offensive tooling and attack trends.


Figure 1: Full blue team support

Installation

Like FLARE VM, we recommend you use Commando VM in a virtual machine. This eases deployment and provides the ability to revert to a clean state prior to each engagement. We assume you have experience setting up and configuring your own virtualized environment. Start by creating a new virtual machine (VM) with these minimum specifications:

  • 60 GB of disk space
  • 2 GB memory

Next, perform a fresh installation of Windows. Commando VM is designed to be installed on Windows 7 Service Pack 1, or Windows 10, with Windows 10 allowing more features to be installed.

Once the Windows installation has completed, we recommend you install your specific VM guest tools (e.g., VMware Tools) to allow additional features such as copy/paste and screen resizing. From this point, all installation steps should be performed within your VM.

  1. Make sure Windows is completely updated with the latest patches using the Windows Update utility. Note: you may have to check for updates again after a restart.
  2. We recommend taking a snapshot of your VM at this point to have a clean instance of Windows before the install.
  3. Navigate to the following URL and download the compressed Commando VM repository onto your VM:
  4. Follow these steps to complete the installation of Commando VM:
    1. Decompress the Commando VM repository to a directory of your choosing.
    2. Start a new session of PowerShell with elevated privileges. Commando VM attempts to install additional software and modify system settings; therefore, escalated privileges are required for installation.
    3. Within PowerShell, change directory to the location where you have decompressed the Commando VM repository.
    4. Change PowerShell’s execution policy to unrestricted by executing the following command and answering “Y” when prompted by PowerShell:
      • Set-ExecutionPolicy unrestricted
    5. Execute the install.ps1 installation script. You will be prompted to enter the current user’s password. Commando VM needs the current user’s password to automatically log in after a reboot. Optionally, you can specify the current user’s password by passing the “-password <current_user_password>” flag at the command line.


Figure 2: Install script running

The rest of the installation process is fully automated. Depending on your Internet speed, the entire installation may take 2 to 3 hours to finish. The VM will reboot multiple times due to the numerous software installation requirements. Once the installation completes, the PowerShell prompt remains open, waiting for you to hit any key before exiting. After completing the installation, you will be presented with the following desktop environment:


Figure 3: Desktop environment after install

At this point it is recommended to reboot the machine to ensure the final configuration changes take effect. After rebooting you will have successfully installed Commando VM! We recommend you power off the VM and then take another snapshot to save a clean VM state to use in future engagements.

Proof of Concept

Commando VM is built with the primary focus of supporting internal engagements. To showcase Commando VM’s capabilities, we constructed an example Active Directory deployment. This test environment may be contrived; however, it represents misconfigurations commonly observed by Mandiant’s Red Team in real environments.

We get started with Commando VM by running network scans with Nmap.


Figure 4: Nmap scan using Commando VM

Looking for low hanging fruit, we find a host machine running an interesting web server on TCP port 8080, a port commonly used for administrative purposes. Using Firefox, we can connect to the server via HTTP over TCP port 8080.
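The low-hanging-fruit check described above boils down to a TCP connect test. As a rough, standard-library-only sketch (the target address is a placeholder for whatever host your scan surfaced; real engagements would use Nmap as shown):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Attempt a TCP connection; True if the port accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Common web/administrative ports worth a quick look.
# Replace 127.0.0.1 with the target host from your scan.
for port in (80, 443, 8080, 8443):
    if is_port_open("127.0.0.1", port):
        print(f"open: {port}")
```

Port 8080 answering with a web application is exactly the kind of result that deserves a closer look in a browser.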


Figure 5: Jenkins server running on host

Let’s fire up Burp Suite’s Intruder and try brute-forcing the login. We navigate to our Wordlists directory in the Desktop folder and select an arbitrary password file from within SecLists.


Figure 6: SecLists password file

After configuring Burp’s Intruder and analyzing the responses, we see that the password “admin” grants us access to the Jenkins console. Classic.
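What Intruder automates here is, at its core, a loop over candidate passwords. A minimal sketch of that logic, with the login check stubbed out (a real check would POST to the Jenkins login form and inspect the response):

```python
def brute_force(candidates, try_login):
    """Return the first candidate accepted by try_login, else None."""
    for password in candidates:
        if try_login(password):
            return password
    return None

# Stand-in for an HTTP login attempt against the target;
# the wordlist and the "valid" set are illustrative only.
valid_passwords = {"admin"}
found = brute_force(["123456", "password", "admin"],
                    lambda p: p in valid_passwords)
print(found)  # admin
```

Tools like Burp's Intruder add the important parts this sketch omits: throttling, response-length analysis, and session handling.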


Figure 7: Successful brute-force of the Jenkins server

It’s well known that Jenkins servers come installed with a Script Console and run as NT AUTHORITY\SYSTEM on Windows systems by default. We can take advantage of this and gain privileged command execution.
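Jenkins also exposes the script console programmatically via its /scriptText endpoint, which accepts a Groovy script as a form field and returns its output. A sketch of building (but not sending) such a request — authentication and CSRF-crumb handling are deliberately omitted, and the host is a placeholder:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_script_console_request(base_url, groovy):
    """Build (but do not send) a POST to the Jenkins script console.

    /scriptText runs the supplied Groovy and returns its output;
    real use would add credentials and a CSRF crumb.
    """
    body = urlencode({"script": groovy}).encode()
    return Request(base_url.rstrip("/") + "/scriptText",
                   data=body, method="POST")

# Groovy that shells out and captures output (a common technique).
groovy = 'println("whoami".execute().text)'
req = build_script_console_request("http://192.168.38.104:8080", groovy)
print(req.full_url)  # http://192.168.38.104:8080/scriptText
```

Because the console runs with the Jenkins service's privileges — NT AUTHORITY\SYSTEM by default on Windows — any command executed this way inherits those privileges.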


Figure 8: Jenkins Script Console

Now that we have command execution, we have many options for the next step. For now, we will investigate the box and look for sensitive files. Through browsing user directories, we find a password file and a private SSH key.


Figure 9: File containing password

Let’s try and validate these credentials against the Domain Controller using CredNinja.


Figure 10: Valid credentials for a domain user

Excellent, now that we know the credentials are valid, we can run CredNinja again to see what hosts the user might have local administrative permissions on.


Figure 11: Running CredNinja to identify local administrative permissions

It looks like we only have administrative permissions over the previous Jenkins host, 192.168.38.104. Not to worry though, now that we have valid domain credentials, we can begin reconnaissance activities against the domain. By executing runas /netonly /user:windomain.local\niso.sepersky cmd.exe and entering the password, we will have an authenticated command prompt up and running.


Figure 12: cmd.exe running as WINDOMAIN\niso.sepersky

Figure 12 shows that we can successfully list the contents of the SYSVOL file share on the domain controller, confirming our domain access. Now we start up PowerShell and start share hunting with PowerView.


Figure 13: PowerView's Invoke-ShareFinder output

We are also curious about what groups and permissions are available to the user account compromised. Let’s use the Get-DomainUser module of the post-exploitation framework PowerView to retrieve user details from Active Directory. Note that Commando VM uses the “dev” branch of PowerView by default.


Figure 14: Get-DomainUser win

We also want to check for further access using the SSH key we found earlier. Looking at our port scans we identify one host with TCP port 22 open. Let’s use MobaXterm and see if we can SSH into that server.


Figure 15: SSH with MobaXterm

We access the SSH server and also find an easy path to rooting the server. However, we weren’t able to escalate domain privileges with this access. Let’s get back to share hunting, starting with that hidden Software share we saw earlier. Using File Explorer, it’s easy to browse shares within the domain.


Figure 16: Browsing shares in windomain.local

Using the output from PowerView’s Invoke-ShareFinder command, we begin digging through shares and hunting for sensitive information. After going through many files, we finally find a config.ini file with hardcoded credentials.


Figure 17: Identifying cleartext credentials in configuration file

Using CredNinja, we validate these credentials against the domain controller and discover that we have local administrative privileges!


Figure 18: Validating WINDOMAIN\svcaccount credentials

Let’s check group memberships for this user.


Figure 19: Viewing group membership of WINDOMAIN\svcaccount

Lucky us, we’re a member of the “Domain Admins” group!

Final Thoughts

All of the tools used in the demo are installed on the VM by default, as well as many more. For a complete list of tools, and for the install script, please see the Commando VM Github repo. We are looking forward to addressing user feedback, adding more tools and features, and creating many enhancements. We believe this distribution will become the standard tool for penetration testers and look forward to continued improvement and development of the Windows attack platform.

Thoughts on OSSEC Con 2019

Last week I attended my first OSSEC conference. I first blogged about OSSEC in 2007, and wrote other posts about it in the following years.

OSSEC is a host-based intrusion detection and log analysis system with correlation and active response features. It is cross-platform, such that I can run it on my Windows and Linux systems. The moving force behind the conference was a company local to me called Atomicorp.

In brief, I really enjoyed this one-day event. (I had planned to attend the workshop on the second day but my schedule did not cooperate.) The talks were almost uniformly excellent and informative. I even had a chance to talk jiu-jitsu with OSSEC creator Daniel Cid, who despite hurting his leg managed to travel across the country to deliver the keynote.

I'd like to share a few highlights from my notes.

First, I had been worried that OSSEC was in some ways dead. I saw that the Security Onion project had replaced OSSEC with a fork called Wazuh, which I learned is apparently pronounced "wazoo." To my delight, I learned OSSEC is decidedly not dead, and that Wazuh has been suffering stability problems. OSSEC has a lot of interesting development ahead of it, which you can track on their Github repo.

For example, the development roadmap includes eliminating Logstash from the pipeline used by many OSSEC users. OSSEC would feed directly into Elasticsearch. One speaker noted that Logstash has a 1.7 GB memory footprint, which astounded me.

On a related note, the OSSEC team is planning to create a new Web console, with a design goal of running in an "AWS t2.micro" instance. The team said that instance offers 2 GB of memory, which doesn't match AWS's specifications; they likely meant either a t2.micro with 1 GB of memory, which is the free-tier instance, or a t2.small with 2 GB. Either way, I'm excited to see this later in 2019.

Second, I thought the presentation by security personnel from USA Today offered an interesting insight. One design goal they had for monitoring their Google Cloud Platform (GCP) was to not install OSSEC on every container or on Kubernetes worker nodes. Several times during the conference, speakers noted that the transient nature of cloud infrastructure is directly antithetical to standard OSSEC usage, whereby OSSEC is installed on servers with long uptime and years of service. Instead, USA Today used OSSEC to monitor HTTP logs from the GCP load balancer, logs from Google Kubernetes Engine, and monitored processes by watching output from successive kubectl invocations.
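The process-watching approach — diffing successive kubectl listings rather than installing an agent on every transient node — can be sketched as follows. The listing format here is simplified and hypothetical; real `kubectl get` output has more columns:

```python
def parse_names(kubectl_output):
    """Extract resource names from a simplified `kubectl get` listing."""
    lines = kubectl_output.strip().splitlines()
    return {line.split()[0] for line in lines[1:]}  # skip the header row

def diff_listings(previous, current):
    """Report what appeared and what vanished between two listings."""
    prev, curr = parse_names(previous), parse_names(current)
    return {"added": curr - prev, "removed": prev - curr}

before = """NAME       READY
web-1      1/1
worker-1   1/1"""
after = """NAME       READY
web-1      1/1
worker-2   1/1"""
print(diff_listings(before, after))
# {'added': {'worker-2'}, 'removed': {'worker-1'}}
```

Feeding deltas like these into a log-analysis engine such as OSSEC sidesteps the problem of agents living on infrastructure that may only exist for minutes.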

Third, a speaker from Red Hat brought my attention to an aspect of containers that I had not considered. Docker and containers had made software testing and deployment a lot easier for everyone. However, those who provide containers have effectively become Linux distribution maintainers. In other words, who is responsible when a security or configuration vulnerability in a Linux component is discovered? Will the container maintainers be responsive?

Another speaker emphasized the difference between "security of the cloud," offered by cloud providers, and "security in the cloud," which is supposed to be the customer's responsibility. This makes sense from a technical point of view, but I expect that in the long term this differentiation will no longer be tenable from a business or legal point of view.

Customers are not going to have the skills or interest to secure their software in the cloud, as they outsource ever more technical talent to the cloud providers and their infrastructure. I expect cloud providers to continue to develop, acquire, and offer more security services, and accelerate their competition on a "complete security environment."

I look forward to more OSSEC development and future conferences.

KNOW AND PICK YOUR ANDROID SECURITY APP WISELY



IN BRIEF: In recent years, we have seen a tremendous increase in mobile applications across many countries – it seems everyone wants to launch a mobile application, for many reasons. On the other hand, the number of fake and malicious mobile applications is growing rapidly, posing a major security risk to mobile users.
-------------------------------------

Mobile application developers now face threats to customer and application data, as automated and sophisticated attacks increasingly target the owners, users and data of mobile applications.

Apart from the privacy risks posed by poorly protected applications from various developers, criminals are also building mobile applications with malicious intent, causing thousands of users who download them to fall victim to cybercrime.





It is prudent to secure our mobile devices with security solutions. Sadly, a recent test of anti-malware apps available in Google Play showed that most are not, in fact, worthy of the name – or, indeed, of the space they take up on an Android device.


Independent testing outfit AV-Comparatives threw the 2,000 most common Android malware samples seen in the wild last year at 250 security (and, as it turns out, also “security”) apps that were available in the Android store in January of this year. Only 80 apps passed the organization’s most basic test – flagging at least 30 percent of the samples as malware while reporting no false positives for some of the most popular and clean apps in Google Play.
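The pass criterion described above reduces to a simple check. The numbers are those reported in the article; the implementation is my own sketch:

```python
def passes_basic_test(detected, total_samples, false_positives):
    """AV-Comparatives' basic bar, as described: flag at least 30%
    of the malware samples while raising zero false positives on
    popular clean apps."""
    detection_rate = detected / total_samples
    return detection_rate >= 0.30 and false_positives == 0

print(passes_basic_test(600, 2000, 0))   # True: exactly 30%, no FPs
print(passes_basic_test(1999, 2000, 1))  # False: one false positive fails it
```

Note how low this bar is — missing 70 percent of known samples still counts as a pass — which makes the fact that only 80 of 250 apps cleared it all the more damning.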

Crucially, only 23 apps passed the test with flying colors; that is, they had a 100-percent success rate at detecting the malicious code.

So, what are those purported anti-malware solutions that failed the test up to? You may have guessed it – for the most part, they’ll only foist ads on you. Put differently, instead of keeping you safe from pests such as banking Trojans, ransomware and other threats, many of the fake security apps will apparently only pester you with unwanted ads, all in the name of easy revenue for the developers.


Indeed, some of the products are already detected, at the very least, as “potentially unwanted applications” by at least some reputable mobile security solutions and are likely to be booted by Google from the Android store soon.

In many cases, the apps’ “malware-detecting functionality” consisted of comparing the package name of any given app against the AV apps’ respective whitelist or blacklist databases. This way of determining whether a piece of software is safe can, of course, be trivially defeated by malware creators. Meanwhile, for the user, it creates a false sense of security.
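A sketch of why name-based "detection" is so weak: renaming the package evades a blacklist entirely, with no change to the malicious code. The package names below are made up:

```python
# Hypothetical blacklist database, keyed only on package name.
BLACKLIST = {"com.example.knownbad"}

def name_based_scan(package_name):
    """The weak approach: judge an app purely by its package name."""
    return "malware" if package_name in BLACKLIST else "clean"

print(name_based_scan("com.example.knownbad"))    # malware
print(name_based_scan("com.example.knownbad2"))   # clean -- same code,
                                                  # new name, undetected
```

Real anti-malware engines inspect the code and behavior of an app, precisely because the name is the cheapest attribute for an attacker to change.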


The fact that many ad-slinging apps are disguised as security solutions may not be a revelation for you. After all, ESET malware researcher Lukáš Štefanko warned early in 2018 about dozens of apps that professed to protect users from malicious code, but were instead only vehicles for displaying ads.

Meanwhile, a number of products that scored poorly in the test were deemed to be the work of what AV-Comparatives called “hobby developers”. Rather than focus on producing quality security software, these software makers apparently produce a variety of apps that are only designed to generate ad revenue for them. Still other developers “just want to have an Android protection app in their portfolio for publicity reasons”, wrote the AV testing outfit.

In addition, user ratings and/or download numbers are not necessarily something to go by. “Most of the 250 apps we looked at had a review score of 4 or higher on the Google Play Store. Similarly, the number of downloads can only be a very rough guide; a successful scam app may be downloaded many times before it is found to be a scam,” wrote AV-Comparatives, adding that the ‘last updated’ date isn’t a reliable indicator, either.


All told, the results can be understandably disheartening. On the other hand, they’re another reminder of the need to stick to reputable products with proven track records in mobile security.

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read Tom Sheldon's Windows NT Security Handbook, published in 1996. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.
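The model both books describe — NT 4 then, cloud IAM now — fits in a few lines: users inherit permissions from the groups they belong to. A minimal sketch, with all group, user, and permission names illustrative:

```python
class RBAC:
    """Minimal group-based access control, in the NT 4 / cloud IAM style."""
    def __init__(self):
        self.group_perms = {}    # group -> set of permissions
        self.memberships = {}    # user -> set of groups

    def grant(self, group, permission):
        self.group_perms.setdefault(group, set()).add(permission)

    def add_user(self, user, group):
        self.memberships.setdefault(user, set()).add(group)

    def allowed(self, user, permission):
        # A user holds a permission if any of their groups grants it.
        return any(permission in self.group_perms.get(g, set())
                   for g in self.memberships.get(user, ()))

rbac = RBAC()
rbac.grant("backup-operators", "read:all-files")
rbac.add_user("alice", "backup-operators")
print(rbac.allowed("alice", "read:all-files"))  # True, via group
print(rbac.allowed("bob", "read:all-files"))    # False, no membership
```

The model itself is sound; as the rest of this post argues, the historical failures came from the environment around it, not the abstraction.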

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may also need improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure no cloud provider assumes a monopoly, or no two providers assume a duopoly. We may end up with the three major players and a smattering of smaller ones, as is the case with many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/respond, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.

A Simple Trillion$ Cyber Security Question for the Entire RSA Conference

Folks,

This week, the famous RSA Conference 2019 is underway, where supposedly "The World Talks Security" -


Image Courtesy RSA Conference. Source: https://www.rsaconference.com/

If that's the case, let's talk -  I'd like to respectfully ask the entire RSA Conference just 1 simple cyber security question -

Question: What lies at the very foundation of cyber security and privileged access of not just the RSAs, EMCs, Dells, CyberArks, Gartners, Googles, Amazons, Facebooks and Microsofts of the world, but also at the foundation of virtually all cyber security and cloud companies and at the foundation of over 85% of organizations worldwide?

For those who may not know the answer to this ONE simple cyber security question, the answer's in line 1 here.



For those who may know the answer, and I sincerely hope that most of the world's CIOs, CISOs, Domain Admins, Cyber Security Analysts, Penetration Testers and Ethical Hackers know the answer, here are 4 simple follow-up questions -


  • Q 1.  Should your organization's foundational Active Directory be compromised, what could be its impact?
  • Q 2.  Would you agree that the (unintentional, intentional or coerced) compromise of a single Active Directory privileged user could result in the compromise of your organization's entire foundational Active Directory?
  • Q 3.  If so, then do you know that there is only one correct way to accurately identify/audit privileged users in your organization's foundational Active Directory, and do you possess the capability to correctly be able to do so?
  • Q 4.  If you don't, then how could you possibly know exactly how many privileged users there are in your organization's foundational Active Directory deployment today, and if you don't know so, ...OMG... ?!

You see, if even the world's top cyber security and cloud computing companies themselves don't know the answers to such simple, fundamental Kindergarten-level cyber security questions, how can we expect 85% of the world's organizations to know the answer, AND MORE IMPORTANTLY, what's the point of all this fancy peripheral cyber security talk at such conferences when organizations don't even know how many (hundreds if not thousands of) people have the Keys to their Kingdom(s) ?!


Today Active Directory is at the very heart of Cyber Security and Privileged Access at over 85% of organizations worldwide, and if you can find me even ONE company at the prestigious RSA Conference 2019 that can help organizations accurately identify privileged users/access in 1000s of foundational Active Directory deployments worldwide, you'll have impressed me.


Those who truly understand Windows Security know that organizations can neither adequately secure their foundational Active Directory deployments nor accomplish any of these recent buzzword initiatives like Privileged Access Management, Privileged Account Discovery, Zero-Trust etc. without first being able to accurately identify privileged users in Active Directory.
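Part of why accurate identification is hard is that Active Directory group membership nests: an account can be privileged only transitively, through a chain of group-in-group memberships. A minimal sketch of expanding nested membership (the group and user names are illustrative, and real AD assessment must also account for delegated permissions, which simple membership walks miss):

```python
def expand_members(group, members_of, seen=None):
    """Recursively resolve a group's membership, following nesting.

    members_of maps a group name to its direct members, each of
    which may be a user or another group.
    """
    if seen is None:
        seen = set()
    users = set()
    for member in members_of.get(group, ()):
        if member in members_of:          # member is itself a group
            if member not in seen:        # guard against cycles
                seen.add(member)
                users |= expand_members(member, members_of, seen)
        else:
            users.add(member)
    return users

members_of = {
    "Domain Admins": ["alice", "Helpdesk Ops"],
    "Helpdesk Ops": ["bob", "Contractors"],
    "Contractors": ["eve"],
}
print(sorted(expand_members("Domain Admins", members_of)))
# ['alice', 'bob', 'eve']
```

In this toy directory, "eve" holds Domain Admins privileges two levels deep — exactly the kind of account a flat membership listing overlooks.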

Best wishes,
Sanjay


PS: Pardon the delay. I've been busy and haven't had much time to blog since my last post on Cyber Security 101 for the C-Suite.

PS2: Microsoft, when were you planning to start educating the world about what's actually paramount to their cyber security?