
WHAT is the ONE Essential Cyber Security Capability WITHOUT which NOT a single Active Directory object or domain can be adequately secured?


Hello again. From today onwards, as I had promised, it is finally TIME for us to help SAFEGUARD Microsoft's Global Ecosystem.

Before I share how we uniquely do so, answer this paramount question, or ask more such questions, I thought I'd start by asking what is likely the most important question, one that today DIRECTLY impacts the foundational cyber security of thousands of organizations worldwide.

Here It Is -
What Is the 1 Essential Cyber Security Capability Without Which NOT a single Active Directory object, domain, forest or deployment can be adequately secured?

A Hint

I'll give you a hint. It controls exactly who is denied and who is granted access to literally everything within Active Directory.

In fact, it comes into play every time anyone accesses anything in any Active Directory domain in any organization worldwide.

Make No Mistake

Make no mistake about it - one simply CANNOT adequately protect anything in any Active Directory WITHOUT possessing this ONE capability, and thus one simply cannot protect the very foundation of an organization's cyber security without possessing this ONE paramount cyber security capability. It unequivocally is as remarkably simple, elemental and fundamental as this.

Only 2 Kinds of Organizations

Thus, today there are only two kinds of organizations worldwide - those that possess this paramount cyber security capability, and those that don't. Those that don't possess this essential capability do not have the means to, and thus cannot adequately protect, their foundational Active Directory deployments, and thus by logic are provably and demonstrably insecure.

If you know the answer, feel free to leave a comment below.
I'll answer this question right here, likely on July 04, 2018.


Facial Recognition And Future Scenarios

Will facial recognition technologies mean we will be permanently under surveillance in the future? Should schools and colleges be teaching children how this technology works? Or should we just ignore this technology as if it wasn’t happening? Are there any alternatives?

Alarming! : Windows Update Automatically Downloaded and Installed an Untrusted Self-Signed Kernel-mode Lenovo Driver on New Surface Device


Given what it is I do, I don't squander a minute of precious time unless something is very important, and this is very important.

Let me explain why this is so alarming, concerning and so important to cyber security, and why at many organizations (e.g. the U.S. Govt., Paramount Defenses, etc.), this could possibly have resulted in, or could in itself be considered, a cyber security breach.

Disclaimer: I'm not making any value judgment about Lenovo; I'm merely basing this on what's already been said.

As you know, Microsoft has been brazenly leaving billions of people and thousands of organizations worldwide with no real choice but to upgrade to its latest operating system, Windows 10, which, albeit far from perfect, is much better than Windows Vista, Windows 8, etc., even though Windows 10's default settings could be considered an egregious affront to privacy.

Consequently, at Paramount Defenses, we too felt that perhaps it was time to consider moving to Windows 10, so we figured we'd refresh our workforce's PCs. Of the major choices available from several reputable PC vendors, Microsoft's Surface was one of the top trustworthy contenders, considering that the entirety of the hardware and software was from the same vendor (and a decently trustworthy one, considering that most of the world runs its operating system), and that there seemed to be no* pre-installed drivers or software that may have been written in China, Russia, etc.

Side-note: Based on information available in the public domain, in all likelihood, software written in / maintained from within Russia may well still be running as System on Domain Controllers within the U.S. Government.

In particular, regardless of its respected heritage, for us, Lenovo wasn't an option, since it is partly owned by the Chinese Govt.

So we decided to evaluate Microsoft Surface devices, purchased a couple of brand-new Microsoft Surface devices from our local Microsoft Store for an initial PoC, and I decided to personally test-drive one of them -

Microsoft Surface

The very first thing we did after unsealing one, walking through the initial setup and locking down Windows 10's unacceptable default privacy settings, was to connect it to the Internet over a secure channel and perform a Windows Update.

I should mention that there was no other device attached to this Microsoft Surface except for a Microsoft Signature Type Cover; in particular, there were no mice of any kind attached to this new Microsoft Surface device, whether via USB or Bluetooth.

Now, you're not going to believe what happened within minutes of having clicked the Check for Updates button!

Windows Update
Downloaded and Installed an Untrusted
Self-Signed Lenovo Device Driver on Microsoft Surface! -

Within minutes, Windows Update automatically downloaded and installed, amongst other packages (notably Surface Firmware), an untrusted self-signed Kernel-mode device driver, purportedly Lenovo - Keyboard, Other hardware - Lenovo Optical Mouse (HID), on this brand-new Microsoft Surface device, i.e. one signed with an untrusted WDK Test Certificate!

Here's a snapshot of Windows Update indicating that it had successfully downloaded and installed a Lenovo driver on this Surface device, and it specifically states "Lenovo - Keyboard, Other hardware - Lenovo Optical Mouse (HID)" -

We couldn't quite believe this.

How could this be possible? i.e. how could a Lenovo driver have been installed on a Microsoft Surface device?

So we checked the Windows Update Log, and sure enough, as seen in the snapshot below, the Windows Update Log too confirmed that Windows Update had just downloaded and installed a Lenovo driver -

We wondered if there might have been any Lenovo hardware components installed on the Surface, so we checked the Device Manager, and we could not find a single device that seemed to indicate the presence of any Lenovo hardware. (Later, we even took it back to the Microsoft Store, and their skilled tech personnel confirmed the same finding, i.e. no Lenovo hardware on it.)

Specifically, as you can see below, we again checked the Device Manager, this time to see if it might indicate the presence of any Lenovo HID, such as a Lenovo Optical Mouse, and as you can see in the snapshot below, the only two Mice and other pointing devices installed on the system were from Microsoft - i.e. no Lenovo mouse presence indicated by Device Manager -

Next, we performed a keyword search of the Registry, and came across a suspicious Driver Package, as seen below -

It seemed suspicious to us because, as can be seen in the snapshot above, all of the other legitimate driver package keys in the Registry had (as they should) three child sub-keys, i.e. Configurations, Descriptors and Strings, but this specific one had only one subkey, titled Properties, and when we tried to open it, we received an Access Denied message!

As you can see above, it seemed to indicate that the provider was Lenovo, that the INF file name was phidmou.inf, and that the OEM path was "C:\Windows\SoftwareDistribution\Download\Install", so we looked at the file system, but this path didn't seem to exist. So we performed a simple file-system search, "dir /s phidmou.*", and as seen in the snapshot below, we found one instance of such a file, located in C:\Windows\System32\DriverStore\FileRepository\.
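For readers who'd like to replicate this step, here's a minimal, cross-platform sketch of the same recursive lookup (the function name is mine; on Windows, the `dir /s phidmou.*` search described above is equivalent):

```python
from pathlib import Path

def find_driver_files(root, pattern="phidmou.*"):
    """Recursively search a directory tree for files matching the
    given pattern, mimicking a `dir /s phidmou.*` search of the
    DriverStore. Returns matching file paths as sorted strings."""
    return sorted(str(p) for p in Path(root).rglob(pattern) if p.is_file())
```

Pointing this at C:\Windows\System32\DriverStore\FileRepository would surface the same phidmou.inf package we found.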

Here's that exact location on the file-system, and as evidenced by the Created date and time for that folder, one can see that this folder (and thus all of its contents) was created on April 01, 2018 at around 1:50 am, which is just around the time at which, per the Windows Update log, the Lenovo Driver was installed -

When we opened that location, we found thirteen items, including six drivers -

Next, we checked the Digital Signature on one of the drivers, PELMOUSE.SYS, and we found that it was signed using a self-signed test Windows Driver certificate, i.e. the .sys files were SELF-SIGNED by a WDKTestCert and their digital signatures were NOT OK, in that they terminated in a root certificate that is not trusted by the trust provider -

Finally, when we clicked on the View Certificate button, as can be seen below, we could see that this driver was in fact merely signed by a test certificate, which is only supposed to be used for testing purposes during the creation and development of Kernel-mode drivers. Quoting from Microsoft's documentation on Driver Testing "However, eventually it will become necessary to test-sign your driver during its development, and ultimately release-sign your driver before publishing it to users." -

Clearly, the certificate seen above is NOT one that is intended to be used for release signing; yet here we have a Kernel-mode driver downloaded by Windows Update and installed on a brand-new Microsoft Surface, and all it's signed by is a test certificate, and who knows who wrote this driver!
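As a rough illustration of how one might triage driver signers at scale: WDK test certificates carry a telltale subject (e.g. the "WDKTestCert" prefix seen above). The sketch below flags such signers; the marker list is my assumption, and this heuristic is no substitute for full certificate-chain validation:

```python
def looks_test_signed(signer_subject):
    """Flag signer subject names that indicate WDK test-signing
    rather than production release-signing. Heuristic only: real
    triage must validate the full chain to a trusted root."""
    markers = ("WDKTestCert", "Test Certificate")  # assumed markers
    return any(m.lower() in signer_subject.lower() for m in markers)
```

Running such a check over the subjects of every driver in the DriverStore would have flagged PELMOUSE.SYS immediately.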

Again, per Microsoft's guidelines on driver signing, which can also be found here, "After completing test signing and verifying that the driver is ready for release, the driver package has to be release signed", and AFAIK, release signing not only requires the signer to obtain and use a code-signing certificate from a code-signing CA, it also requires a cross cert issued by Microsoft.

If that is indeed the case, then a Kernel-mode driver that is not signed with a valid code-signing certificate, and one whose digital signature does not contain Microsoft's cross cert, should not even be accepted into the Windows Update catalog.

It is thus hard to believe that a Windows Kernel-Mode Driver that is merely self-signed with a test certificate would even make it into the Windows Update catalog; further, it seems that in this case, not only did it make it in, it was downloaded and in fact successfully installed onto a system, which clearly seems highly suspicious, and is in fact alarming and deeply concerning!

How could this be? How could Windows Update (a trusted system process of the operating system), which we all have no choice but to trust, blindly and completely, have itself installed an untrusted self-signed Lenovo driver (i.e. code running in Kernel-Mode) on a Microsoft Surface device?

Frankly, since this piece of software was signed using a self-signed test cert, who's to say this was even a real Lenovo driver? It could very well be some malicious code purporting to be a Lenovo driver. Or, there is also the remote possibility that it could be a legitimate Lenovo driver that is self-signed, but if that is the case, its installation should not have been allowed to succeed.

Unacceptable and Deeply Concerning

To us, this is unacceptable, alarming and deeply concerning, and here's why.

We just had, on a device we considered trustworthy (and could possibly have engaged in business on), procured from a vendor we consider trustworthy (considering that the entire world's cyber security ultimately depends on them), an unknown, self-signed piece of software, purportedly of Chinese origin, installed on the device and now running in Kernel-mode, by this device's vendor's (i.e. Microsoft's) own operating system's update program!

We have not had an opportunity to analyze this code, but if it is indeed malicious in any way, in effect, it would've, unbeknownst to us and for no fault of ours, granted System-level control over a trusted device within our perimeter, to some entity in China.

How much damage could that have caused? Well, suffice it to say that, for those who know Windows Security well, if this was indeed malicious, it would've been sufficient to potentially compromise any organization within which this potentially suspect and malicious package may have been auto-installed by Windows Update. (I've elaborated a bit on this below.)

In the simplest scenario, if a company's Domain Admins had been using this device, it would've been Game Over right there!

This leads me to the next question - we can't help but wonder how many such identical Surface devices exist out there today, perhaps at 1000s of organizations, on which this suspicious unsigned Lenovo driver may have been downloaded and installed?

This also leads me to another very important question - just how much trust can we, the world, place in Windows Update?

In our case, it just so happened that we were in front of this device during this Windows Update process, and that's how we noticed this; by the way, after it was done, it gave the familiar "Your device is up to date" message.

Speaking of which, here's another equally important question - for all organizations that are using Microsoft Surface, and may be using it for mission-critical or sensitive purposes (e.g. AD administration), what is the guarantee that this won't happen again?

I ask because if you understand cyber security, then you know, that it ONLY takes ONE instance of ONE malicious piece of software to be installed on a system, to compromise the security of that system, and if that system was a highly-trusted internal system (e.g. that machine's domain computer account had the "Trusted for Unconstrained Delegation" bit set), then this could very likely also aid perpetrators in ultimately gaining complete command and control of the entire IT infrastructure. As I have already alluded to above, if by chance the target/compromised computer was one that was being used by an Active Directory Privileged User, then, it would be tantamount to Game Over right then and there!
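For reference, the "Trusted for Unconstrained Delegation" setting mentioned above corresponds to the TRUSTED_FOR_DELEGATION bit (0x80000) of an account's userAccountControl attribute in Active Directory; a minimal sketch of checking it (the function name is mine):

```python
# userAccountControl bit for "Trusted for (unconstrained) delegation"
TRUSTED_FOR_DELEGATION = 0x80000

def is_unconstrained_delegation(user_account_control):
    """Return True if an account's userAccountControl value has the
    TRUSTED_FOR_DELEGATION bit set, i.e. the machine can impersonate
    any user that authenticates to it - a prized target."""
    return bool(user_account_control & TRUSTED_FOR_DELEGATION)
```

Auditing computer accounts for this bit is one way to identify the high-value machines whose compromise would be most damaging.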

Think about it - this could have happened at any organization, from say the U.S. Government to the British Government, or from say a Goldman Sachs to a Palantir, or say from a stock-exchange to an airline, or say at a clandestine national security agency to say at a nuclear reactor, or even Microsoft itself. In short, for absolutely no fault of theirs, an organization could potentially have been breached by a likely malicious piece of software that the operating system's own update utility had downloaded and installed on the System, and in 99% of situations, because hardly anyone checks what gets installed by Windows Update (now that we have to download and install a whopping 600MB patch every Tuesday), this would likely have gone unnoticed!

Again, to be perfectly clear, I'm not saying that a provably malicious piece of software was in fact downloaded and installed on a Microsoft Surface device by Windows Update. What I'm saying is that a highly suspicious piece of software, one that was built and intended to run in Kernel-mode and yet was merely signed with a test certificate, was somehow automatically downloaded and installed on a Microsoft Surface device. That, to us, is deeply concerning, because in essence, if this could happen, then even at organizations that may be spending millions on cyber security, a single such piece of software quietly making its way in through such a trusted channel could instantly render their entire multi-million dollar cyber security apparatus useless and jeopardize the security of the entire organization - and this could happen at thousands of organizations worldwide.

With full respect to Microsoft and Mr. Nadella, this is deeply concerning and unacceptable, and I'd like some assurance, as I'm sure would 1000s of other CEOs and CISOs, that this will never happen again, on any Surface device, in any organization.

In our case, this was very important, because had we put that brand-new Surface device, procured from none other than the Microsoft Store, into operation (even if we had re-imaged it with an ultra-secure locked-down internal image), then from minute one, post the initial Windows Update, we would likely have had a potentially compromised device running within our internal network, and it could perhaps have led to us being breached.

If I Were Microsoft, I'd Send a Plane

Dear Microsoft, we immediately quarantined that Microsoft Surface device, and we have it in our possession.

If I were you, I'd send a plane to get it picked up ASAP, so you can thoroughly investigate every little aspect of this to figure out how this possibly happened, and get to the bottom of it! (Petty process note: The Microsoft Store let us keep the device for a bit longer, but will not let us return the device past June 24, and the only reason we've kept it, is in case you'd want to analyze it.)

Here's why. At the very least, if I were still at Microsoft, and in charge of Cyber Security -
  1. I'd want to know how an untrusted Kernel-mode device driver made it into the Windows Catalog
  2. I'd want to know why a Microsoft Surface device downloaded a purportedly Lenovo driver
  3. I'd want to know how Windows 10 permitted and in fact itself installed an untrusted driver
  4. I'd want to know exactly which SKUs of Microsoft Surface this may have happened on
  5. I'd want to know exactly how many such Microsoft Surface devices out there may have downloaded this package 

Further, considering that Microsoft Corp itself may easily have thousands of Surface devices in use within Microsoft, if I were still with Microsoft CorpSec, I'd certainly want to know how many of its own Surface devices may have automatically downloaded and installed this highly suspicious piece of untrusted self-signed software.

In short, Microsoft, if you care as deeply about cyber security as you say you do, and by that I'm referring to what Mr. Nadella, the CEO of Microsoft, recently said (see video below: 0:40 - 0:44) and I quote "we spend over a billion dollars of R&D each year, in building security into our mainstream products", then you'll want to get to the bottom of this, because other than the Cloud, what else could be a more mainstream product for Microsoft today than Microsoft Windows and Microsoft Surface?! -

Also, speaking of Microsoft's ecosystem, it indeed is time to help safeguard Microsoft's global ecosystem. (But I digress,)

In Conclusion

Folks, the only reason I decided to publicly share this is because I care deeply about cyber security, and I believe that this could potentially have impacted the foundational cyber security of any, and potentially, of thousands of organizations worldwide.

Hopefully, as you'll agree, a trusted component (i.e. Windows Update) of an operating system that virtually the whole world will soon be running on (i.e. Windows 10), should not be downloading and installing a piece of software that runs in Kernel-mode, when that piece of software isn't even digitally signed by a valid digital certificate, because if that piece of software happened to be malicious, then in doing so, it could likely, automatically, and for no fault of its users, instantly compromise the cyber security of possibly thousands of organizations worldwide. This is really as simple, as fundamental and as concerning, as that. 

All in all, the Microsoft Surface is an incredible device, and because, like Apple's computers, the entire hardware and software stack is under the control of a single vendor, Microsoft has a huge opportunity to deliver a trustworthy computing device to the world, and we'd love to embrace it. Thus, it is vital for Microsoft to ensure that its other components (e.g. Windows Update) do not let the security of its mainstream products down, because per the Principle of the Weakest Link, "a system is only as secure as its weakest link."

By the way, I happen to be a former Microsoft Program Manager for Active Directory Security, and I care deeply about Microsoft.

For those who may not know what Active Directory Security is (i.e. most CEOs, a few CISOs, and most employees and citizens), suffice it to say that global security may depend on Active Directory Security, and it may thus be a matter of paramount defenses.

Most respectfully,

PS: Full Disclosure: I had also immediately brought this matter to the attention of the Microsoft Store. They escalated it to Tier-3 support (based out of New Delhi, India), who then asked me to use the Windows Feedback utility to share the relevant evidence with Microsoft, which I immediately and dutifully did, but/and I never heard back from anyone at Microsoft in this regard again.

PS2: Another small request to Microsoft - Dear Microsoft, while at it, could you please also educate your global customer base about the paramount importance of Active Directory Effective Permissions, which is the ONE capability without which not a single object in any Active Directory deployment can be adequately secured! Considering that Active Directory is the foundation of the cyber security of over 85% of all organizations worldwide, this is important. Over the last few years, we've had almost 10,000 organizations from 150+ countries knock at our doors, and virtually none of them seem to know this most basic and cardinal fact of Windows Security. I couldn't begin to tell you how shocking it is for us to learn that most Domain Admins and many CISOs out there don't have a clue. Can you imagine just how insecure and vulnerable an organization could be today if its Domain Admins don't even know what Active Directory Effective Permissions are, let alone possess this paramount capability?

Bring Your Own Land (BYOL) – A Novel Red Teaming Technique


One of the most significant recent developments in sophisticated offensive operations is the use of “Living off the Land” (LotL) techniques by attackers. These techniques leverage legitimate tools present on the system, such as the PowerShell scripting language, in order to execute attacks. The popularity of PowerShell as an offensive tool culminated in the development of entire Red Team frameworks based around it, such as Empire and PowerSploit. In addition, the execution of PowerShell can be obfuscated through the use of tools such as “Invoke-Obfuscation”. In response, defenders have developed detections for the malicious use of legitimate applications. These detections include suspicious parent/child process relationships, suspicious process command line arguments, and even deobfuscation of malicious PowerShell scripts through the use of Script Block Logging.

In this blog post, I will discuss an alternative to current LotL techniques. With the most current build of Cobalt Strike (version 3.11), it is now possible to execute .NET assemblies entirely within memory by using the “execute-assembly” command. By developing custom C#-based assemblies, attackers no longer need to rely on the tools present on the target system; they can instead write and deliver their own tools, a technique I call Bring Your Own Land (BYOL). I will demonstrate this technique through the use of a custom .NET assembly that replicates some of the functionality of the PowerSploit project. I will also discuss how detections can be developed around BYOL techniques.


At DerbyCon last year, I had the pleasure of meeting Raphael Mudge, the developer behind the Cobalt Strike Remote Access Tool (RAT). During our discussion, I mentioned how useful it would be to be able to load .NET assemblies into Cobalt Strike beacons, similar to how PowerShell scripts can be imported using the “powershell-import” command. During a previous Red Team engagement I had been involved in, the use of PowerShell was precluded by the Endpoint Detection and Response (EDR) agent present on host machines within the target environment. This was a significant issue, as much of my red teaming methodology at the time was based on the use of various PowerShell scripts. For example, the “Get-NetUser” cmdlet of the PowerView script allows for the enumeration of domain users within an Active Directory environment. While the use of PowerShell was not an option, I found that no application-based whitelisting was occurring on hosts within the target’s environment. Therefore, I started converting the PowerShell functionality into C# code, compiling assemblies locally as Portable Executable (PE) files, and then uploading the PE files onto target machines and executing them. This tactic was successful, and I was able to use these custom assemblies to elevate privileges up to Domain Admin within the target environment.

Raphael agreed that the ability to load these assemblies in-memory using a Cobalt Strike beacon would be useful, and about 8 months later this functionality was incorporated into Cobalt Strike version 3.11 via the “execute-assembly” command.

“execute-assembly” Demonstration

For this demonstration, a custom C# .NET assembly named “get-users” was used. This assembly replicated some of the functionality of the PowerView “Get-NetUser” cmdlet; it queried the Domain Controller of the specified domain for a list of all current domain accounts. Information obtained included the “SAMAccountName”, “UserGivenName”, and “UserSurname” properties for each account. The domain is specified by passing its FQDN as an argument, and the results are then sent to stdout. The assembly being executed within a Cobalt Strike beacon is shown in Figure 1.

Figure 1: Using the “execute-assembly” command within a Cobalt Strike beacon.
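Under the hood, a Get-NetUser-style query is simply an LDAP search against the domain. Here's a minimal, hypothetical sketch (in Python for illustration; the actual assembly is C#, and the function name and structure here are mine, not the tool's) of the search parameters such a query uses:

```python
def build_user_search(domain_fqdn):
    """Build LDAP search parameters for a Get-NetUser-style domain
    user enumeration: a search base derived from the domain FQDN, a
    filter matching person-class user objects, and the attributes
    the "get-users" assembly reports."""
    base_dn = ",".join("DC=" + part for part in domain_fqdn.split("."))
    search_filter = "(&(objectCategory=person)(objectClass=user))"
    attributes = ["sAMAccountName", "givenName", "sn"]
    return base_dn, search_filter, attributes
```

An LDAP client (System.DirectoryServices in C#) would then issue this search against the Domain Controller and print each result to stdout.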

Simple enough, now let’s take a look at how this technique works under the hood.

How “execute-assembly” Works

In order to discover more about how the “execute-assembly” command works, the execution performed in Figure 1 was repeated with the host running ProcMon. The results of the process tree from ProcMon after execution are shown in Figure 2.

Figure 2: Process tree from ProcMon after executing “execute-assembly” command.

In Figure 2, The “powershell.exe (2792)” process contains the beacon, while the “rundll32.exe (2708)” process is used to load and execute the “get-users” assembly. Note that “powershell.exe” is shown as the parent process of “rundll32.exe” in this example because the Cobalt Strike beacon was launched by using a PowerShell one-liner; however, nearly any process can be used to host a beacon by leveraging various process migration techniques. From this information, we can determine that the “execute-assembly” command is similar to other Cobalt Strike post-exploitation jobs. In Cobalt Strike, some functions are offloaded to new processes, in order to ensure the stability of the beacon. The rundll32.exe Windows binary is used by default, although this setting can be changed. In order to migrate the necessary code into the new process, the CreateRemoteThread function is used. We can confirm that this function is utilized by monitoring the host with Sysmon while the “execute-assembly” command is performed. The event generated by the use of the CreateRemoteThread function is shown in Figure 3.

Figure 3: CreateRemoteThread Sysmon event, created after performing the “execute-assembly” command.

More information about this event is shown in Figure 4.

Figure 4: Detailed information about the Sysmon CreateRemoteThread event shown in Figure 3.
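The Sysmon record in question is Event ID 8 (CreateRemoteThread). A minimal sketch of pulling the source and target process images out of such an event for alerting follows; the XML here is illustrative, with made-up values following Sysmon's Data-element schema:

```python
import xml.etree.ElementTree as ET

# Illustrative Sysmon Event ID 8 (CreateRemoteThread) record; the
# Data element names follow Sysmon's schema, the values are made up.
SAMPLE_EVENT = r"""
<Event>
  <EventData>
    <Data Name="SourceImage">C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</Data>
    <Data Name="TargetImage">C:\Windows\System32\rundll32.exe</Data>
  </EventData>
</Event>
"""

def remote_thread_pair(event_xml):
    """Extract the (source, target) process images from a Sysmon
    CreateRemoteThread event, e.g. to alert on code being injected
    into rundll32.exe from an unexpected parent."""
    data = {d.get("Name"): d.text
            for d in ET.fromstring(event_xml).iter("Data")}
    return data["SourceImage"], data["TargetImage"]
```

Correlating these pairs against known-good injectors is one way to surface post-exploitation jobs like the one shown in Figures 3 and 4.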

In order to execute the provided assembly, the Common Language Runtime (CLR) must be loaded into the newly created process. From the ProcMon logs, we can determine the exact DLLs that are loaded during this step. A portion of these DLLs are shown in Figure 5.

Figure 5: Example of DLLs loaded into rundll32 for hosting the CLR.

In addition, DLLs loaded into the rundll32 process include those necessary for the get-users assembly, such as those for LDAP communication and Kerberos authentication. A portion of these DLLs are shown in Figure 6.

Figure 6: Example of DLLs loaded into rundll32 for Kerberos authentication.

The ProcMon logs confirm that the provided assembly is never written to disk, making the “execute-assembly” command an entirely in-memory attack.

Detecting and Preventing “execute-assembly”

There are several ways to protect against the “execute-assembly” command. As previously detailed, because the technique is a post-exploitation job in Cobalt Strike, it uses the CreateRemoteThread function, which is commonly detected by EDR solutions. However, it is possible that other implementations of BYOL techniques would not require the use of the CreateRemoteThread function.

The “execute-assembly” technique makes use of the native LoadImage function in order to load the provided assembly. The CLRGuard project hooks into the use of this function, and prevents its execution. An example of CLRGuard preventing the execution of the “execute-assembly” command is shown in Figure 7.

Figure 7: CLRGuard blocking the execution of the “execute-assembly” technique.

The resulting error is shown on the Cobalt Strike teamserver in Figure 8.

Figure 8: Error shown in Cobalt Strike when “execute-assembly” is blocked by CLRGuard.

While CLRGuard is effective at preventing the “execute-assembly” command, as well as other BYOL techniques, it is likely that blocking all use of the LoadImage function on a system would negatively impact other benign applications, and is not recommended for production environments.

As with almost all security issues, baselining and correlation is the most effective means of detecting this technique. Suspicious events to correlate could include the use of the LoadImage function by processes that do not typically utilize it, and unusual DLLs being loaded into processes.
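As a sketch of that correlation idea, assuming Sysmon ImageLoad (Event ID 7) telemetry reduced to (process, DLL) pairs - the function, DLL list and baseline set below are illustrative, not a production detection:

```python
# CLR-hosting DLLs whose load into an unexpected process is a useful
# BYOL signal (standard .NET runtime module names).
CLR_DLLS = {"clr.dll", "mscoree.dll", "clrjit.dll"}

def flag_unusual_clr_loads(image_load_events, baseline_processes):
    """Given (process_name, dll_name) ImageLoad events, return the
    processes loading the CLR that are not in the baseline set of
    lowercase process names known to legitimately host .NET."""
    return sorted({proc for proc, dll in image_load_events
                   if dll.lower() in CLR_DLLS
                   and proc.lower() not in baseline_processes})
```

A rundll32.exe process loading clr.dll, as seen in Figure 5, would stand out immediately against almost any baseline.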

Advantages of BYOL

Due to the prevalent use of PowerShell scripts by sophisticated attackers, detection of malicious PowerShell activity has become a primary focus of current detection methodology. In particular, version 5 of PowerShell allows for the use of Script Block Logging, which is capable of recording exactly what PowerShell scripts are executed by the system, regardless of obfuscation techniques. In addition, Constrained Language mode can be used to restrict PowerShell functionality. While bypasses exist for these protections, such as PowerShell downgrade attacks, each bypass an attacker attempts is another event that a defender can trigger off of. BYOL allows for the execution of attacks normally performed by PowerShell scripts, while avoiding all potential PowerShell-based alerts entirely.

PowerShell is not the only native binary whose malicious use is being tracked by defenders. Other common binaries that can generate alerts on include WMIC, schtasks/at, and reg. The functionality of all these tools can be replicated within custom .NET assemblies, due to the flexibility of C# code. By being able to perform the same functionality as these tools without using them, alerts that are based on their malicious use are rendered ineffective.

Finally, thanks to the use of reflective loading, the BYOL technique can be performed entirely in-memory, without the need to write to disk.


BYOL presents a powerful new technique for red teamers to remain undetected during their engagements, and can easily be used with Cobalt Strike’s “execute-assembly” command. In addition, the use of C# assemblies can offer attackers more flexibility than similar PowerShell scripts can afford. Detections for CLR-based techniques, such as hooking of functions used to reflectively load assemblies, should be incorporated into defensive methodology, as these attacks are likely to become more prevalent as detections for LotL techniques mature.


Special thanks to Casey Erikson, who I have worked closely with on developing C# assemblies that leverage this technique, for his contributions to this blog.

A Totally Tubular Treatise on TRITON and TriStation


In December 2017, FireEye's Mandiant discussed an incident response involving the TRITON framework. The TRITON attack and many of the publicly discussed ICS intrusions involved routine techniques where the threat actors used only what is necessary to succeed in their mission. For both INDUSTROYER and TRITON, the attackers moved from the IT network to the OT (operational technology) network through systems that were accessible to both environments. Traditional malware backdoors, Mimikatz distillates, remote desktop sessions, and other well-documented, easily-detected attack methods were used throughout these intrusions.

Despite the routine techniques employed to gain access to an OT environment, the threat actors behind the TRITON malware framework invested significant time learning about the Triconex Safety Instrumented System (SIS) controllers and TriStation, a proprietary network communications protocol. The investment and purpose of the Triconex SIS controllers leads Mandiant to assess the attacker's objective was likely to build the capability to cause physical consequences.

TriStation remains closed source and there is no official public information detailing the structure of the protocol, raising several questions about how the TRITON framework was developed. Did the actor have access to a Triconex controller and TriStation 1131 software suite? When did development first start? How did the threat actor reverse engineer the protocol, and to what extent? What is the protocol structure?

FireEye’s Advanced Practices Team was born to investigate adversary methodologies and to answer these types of questions, so we started with a deeper look at TRITON’s own Python scripts.


  • TRITON – Malware framework designed to operate Triconex SIS controllers via the TriStation protocol.
  • TriStation – UDP network protocol specific to Triconex controllers.
  • TRITON threat actor – The human beings who developed, deployed and/or operated TRITON.

Diving into TRITON's Implementation of TriStation

TriStation is a proprietary network protocol and there is no public documentation detailing its structure or how to create software applications that use TriStation. The current TriStation UDP/IP protocol is little understood, but natively implemented through the TriStation 1131 software suite. TriStation operates over UDP port 1502 and allows for communications between designated masters (PCs running the software, i.e. engineering workstations) and slaves (Triconex controllers with special communications modules) over a network.

To us, the Triconex systems, software and associated terminology sound foreign and complicated, and the TriStation protocol is no different. Attempting to understand the protocol from ground zero would take a considerable amount of time and reverse engineering effort – so why not learn from TRITON itself? With the TRITON framework containing TriStation communication functionality, we pursued studying the framework to better understand this mysterious protocol. Work smarter, not harder, amirite?

The TRITON framework has a multitude of functionalities, but we started with the basic components:

  • TS_cnames.pyc # Compiled at: 2017-08-03 10:52:33
  • TsBase.pyc # Compiled at: 2017-08-03 10:52:33
  • TsHi.pyc # Compiled at: 2017-08-04 02:04:01
  • TsLow.pyc # Compiled at: 2017-08-03 10:46:51

TsLow.pyc (Figure 1) contains several pieces of code for error handling, but these also present some cues to the protocol structure.

Figure 1: TsLow.pyc function print_last_error()

In TsLow.pyc’s print_last_error function we see error handling for “TCM Error”. This compares the TriStation packet value at offset 0 with a value in a corresponding array from TS_cnames.pyc (Figure 2), which is largely used as a “dictionary” for the protocol.

Figure 2: TS_cnames.pyc TS_cst array

From this we can infer that offset 0 of the TriStation protocol contains message types. This is supported by an additional function, tcm_result, which declares type, size = struct.unpack('<HH', data_received[0:4]), indicating that the first two bytes are an integer message type and the next two bytes an integer message size. This is our first glimpse into what the threat actor(s) understood about the TriStation protocol.
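As a minimal sketch of what this layout implies, here is how that same header can be parsed in Python, mirroring tcm_result's own struct.unpack call; the sample packet below is hypothetical, built only to exercise the header format described above.

```python
import struct

def parse_tcm_header(data_received):
    """Parse the first four bytes of a TriStation packet the way
    tcm_result does: two little-endian 16-bit integers."""
    # offsets 0-1: message type; offsets 2-3: message size
    msg_type, size = struct.unpack('<HH', data_received[0:4])
    return msg_type, size

# Hypothetical Execution Command (type 5) header followed by a dummy body.
sample = struct.pack('<HH', 5, 24) + b'\x00' * 24
msg_type, size = parse_tcm_header(sample)
# msg_type == 5, size == 24
```

Because the type field is two little-endian bytes and only 11 message types are defined, the second byte of the type is always 0x00 on the wire.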

Since there are only 11 defined message types, it really doesn't matter much if the type is one byte or two because the second byte will always be 0x00.

We also have indications that message type 5 is for all Execution Command Requests and Responses, so it is curious to observe that the TRITON developers called this “Command Reply.” (We won’t understand this naming convention until later.)

Next we examine TsLow.pyc’s print_last_error function (Figure 3) to look at “TS Error” and “TS_names.” We begin by looking at the ts_err variable and see that it references ts_result.

Figure 3: TsLow.pyc function print_last_error() with ts_err highlighted

We follow that thread to ts_result, which unpacks the next 10 bytes into a handful of variables (Figure 4): dir, cid, cmd, cnt, unk, cks, siz = struct.unpack('<…', ts_packet[0:10]). Now things are heating up. What fun. There’s a lot to unpack here, but the most interesting part is how this piece of script breaks down 10 bytes from ts_packet into different variables.

Figure 4: ts_result with ts_packet header variables highlighted

Figure 5: tcm_result

Referencing tcm_result (Figure 5) we see that it defines type and size as the first four bytes (offsets 0-3) and returns the packet bytes 4:-2 (offset 4 to the end minus 2, because the last two bytes are the CRC-16 checksum). Now that we know where tcm_result leaves off, we know that the ts_reply “cmd” is a single byte at offset 6, which corresponds to the values in the TS_cnames.pyc TS_names array (Figure 6). The TRITON script also tells us that any integer value over 100 is likely a “command reply.” Sweet.
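Pulling these observations together, a small Python sketch of the inferred packet layout looks like this. The offsets (type/size at 0-3, body at 4:-2, command byte at offset 6, trailing CRC-16) come from the analysis above; the CRC-16 variant is not documented here, so the checksum is carved out but not verified, and the sample packet is a hypothetical Execution Command with placeholder header bytes.

```python
import struct

def split_ts_packet(packet):
    """Slice a TriStation packet per the layout inferred from
    tcm_result and ts_result. Checksum is extracted, not validated."""
    msg_type, size = struct.unpack('<HH', packet[0:4])  # offsets 0-3
    body = packet[4:-2]       # what tcm_result returns to the caller
    crc = packet[-2:]         # trailing two bytes: CRC-16 checksum
    cmd = packet[6]           # single byte at offset 6 (ts_result's 'cmd')
    is_reply = cmd > 100      # values over 100 look like command replies
    return msg_type, size, cmd, is_reply, body, crc

# Hypothetical Execution Command carrying function code 55
# ('Allocate program'); all other header bytes are placeholders.
pkt = struct.pack('<HH', 5, 9) + bytes([0, 0, 55, 0, 0, 0, 0]) + b'\xab\xcd'
msg_type, size, cmd, is_reply, body, crc = split_ts_packet(pkt)
```

A reply to the same command would carry 153 ('Allocate program response') at offset 6, tripping the over-100 check.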

When looking back at the ts_result packet header definitions, we begin to see some gaps in the TRITON developer's knowledge: dir, cid, cmd, cnt, unk, cks, siz = struct.unpack('<…', ts_packet[0:10]). We're clearly speculating based on naming conventions, but we get an impression that offsets 4, 5 and 6 could be "direction", "controller ID" and "command", respectively. A value such as "unk" shows that the developer either did not know or did not care to identify this field. We suspect it is a constant, but this value is still unknown to us.

Figure 6: Excerpt TS_cnames.pyc TS_names array, which contain TRITON actor’s notes for execution command function codes

TriStation Protocol Packet Structure

The TRITON threat actor’s knowledge and reverse engineering effort provides us a better understanding of the protocol. From here we can start to form a more complete picture and document the basic functionality of TriStation. We are primarily interested in message type 5, Execution Command, which best illustrates the overall structure of the protocol. Other, smaller message types will have varying structure.

Figure 7: Sample TriStation "Allocate Program" Execution Command, with color annotation and protocol legend

Corroborating the TriStation Analysis

Minute discrepancies aside, the TriStation structure detailed in Figure 7 is supported by other public analyses. Foremost, researchers from the Coordinated Science Laboratory (CSL) at University of Illinois at Urbana-Champaign published a 2017 paper titled "Attack Induced Common-Mode Failures on PLC-based Safety System in a Nuclear Power Plant". The CSL team mentions that they used the Triconex System Access Application (TSAA) protocol to reverse engineer elements of the TriStation protocol. TSAA is a protocol developed by the same company as TriStation. Unlike TriStation, the TSAA protocol structure is described within official documentation. CSL assessed that similarities would exist between the two protocols and leveraged TSAA to better understand TriStation. The team's overall research and analysis of the general packet structure aligns with our TRITON-sourced packet structure.

There are some awesome blog posts and whitepapers out there that support our findings in one way or another. Writeups by Midnight Blue Labs, Accenture, and US-CERT each explain how the TRITON framework relates to the TriStation protocol in superb detail.

TriStation's Reverse Engineering and TRITON's Development

When TRITON was discovered, we began to wonder how the TRITON actor reverse engineered TriStation and implemented it into the framework. We have a lot of theories, all of which seemed plausible: Did they build, buy, borrow, or steal? Or some combination thereof?

Our initial theory was that the threat actor purchased a Triconex controller and software for their own testing and reverse engineering from the "ground up," although if this were the case we do not believe they had a controller with the exact vulnerable firmware version; otherwise they would have had fewer problems with TRITON in practice at the victim site. They may have bought or used a demo version of the TriStation 1131 software, allowing them to reverse engineer enough of TriStation for the framework. They may have stolen TriStation Python libraries from ICS companies, subsidiaries or system integrators and used the stolen material as a base for TriStation and TRITON development. Then again, it is possible that they borrowed TriStation software, Triconex hardware and Python connectors from a government-owned utility that was using them legitimately.

Looking at the raw TRITON code, some of the comments may appear oddly phrased, but the developer clearly uses much of the right vernacular and acronyms, demonstrating familiarity with PLC programming. The TS_cnames.pyc script contains interesting typos such as 'Set lable', 'Alocate network accepted', 'Symbol table ccepted' and 'Set program information reponse'. These appear to be normal human error and reflect neither poor written English nor laziness in coding. The significant amount of annotation, cascading logic, and robust error handling throughout the code suggests thoughtful development and testing of the framework. This complicates the theory of "ground up" development, so did they base their code on something else?

While learning from the TriStation functionality within TRITON, we continued to explore legitimate TriStation software. We began our search for "TS1131.exe" and hit dead ends sorting through TriStation DLLs until we came across a variety of TriStation utilities in MSI form. We ultimately stumbled across a juicy archive containing "Trilog v4." Upon further inspection, this file installed "TriLog.exe," which the original TRITON executable mimicked, and a couple of supporting DLLs, all of which were timestamped around August 2006.

When we saw the DLL file description "Tricon Communications Interface" and original file name "TricCom.DLL", we knew we were in the right place. With a simple look at the file strings, "BAZINGA!" We struck gold.

File Description: Tricon Communications Interface
Product Name: TricCom Dynamic Link Library
Original File Name: TricCom.DLL
Copyright: © 1993-2006 Triconex Corporation

The tr1com40.DLL is exactly what you would expect to see in a custom application package. It is a library that helps support the communications for a Triconex controller. If you've pored over TRITON as much as we have, the moment you look at strings you can see the obvious overlaps between the legitimate DLL and TRITON's own TS_cnames.pyc.

Figure 8: Strings excerpt from tr1com40.DLL

Each of the execution command "error codes" from TS_cnames.pyc appears in the strings of tr1com40.DLL (Figure 8). We see "An MP has re-educated" and "Invalid Tristation I command", and even misspelled command strings verbatim, such as "Non-existant data item" and "Alocate network accepted". We also see many of the same unknown values. What is obvious from this discovery is that some of the strings in TRITON are likely based on code used in communications libraries for Trident and Tricon controllers.

In our brief survey of the legitimate Triconex Corporation binaries, we observed a few samples with related string tables.


Each sample's string table referenced one of two CPP source files:

$Workfile:   LAGSTRS.CPP  $ $Modtime:   Jul 21 1999 17:17:26  $ $Revision:   1.0

$Workfile:   TR1STRS.CPP  $ $Modtime:   May 16 2006 09:55:20  $ $Revision:   1.4

We extracted the CPP string tables in TR1STRS and LAGSTRS and the TS_cnames.pyc TS_names array from TRITON, and compared the 210, 204, and 212 relevant strings from each respective file.

TS_cnames.pyc TS_names and tr1com40.dll share 202 of 220 combined table strings. The remaining strings are unique to each, as seen here:

Strings unique to TS_cnames.TS_names (2017 pyc) or Tr1com40.dll (2006 CPP) include:

  • Go to DOWNLOAD mode
  • Not set
  • Bad message from module
  • Bad message type
  • Bad TMI version number
  • Module did not respond
  • Open Connection: Invalid SAP %d
  • Unsupported message for this TMI version
  • Wrong command

TS_cnames.pyc TS_names and Tridcom.dll (1999 CPP) shared only 151 of 268 combined table strings, showing a much smaller overlap with the seemingly older CPP library. This makes sense based on the context that Tridcom.dll is meant for a Trident controller, not a Tricon controller. It does seem as though Tr1com40.dll and TR1STRS.CPP code was based on older work.
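The overlap arithmetic above is simple set algebra. The sketch below reproduces it in Python with toy stand-ins for the extracted tables; the real TS_names and TR1STRS sets held 212 and 210 relevant strings, so 202 shared strings imply 220 combined (212 + 210 - 202), matching the figures in this post.

```python
# Hypothetical miniature string tables standing in for the real extracted
# sets; the three sample strings are ones quoted elsewhere in this post.
ts_names = {'Set lable', 'Alocate network accepted', 'Go to DOWNLOAD mode'}
tr1com = {'Set lable', 'Alocate network accepted', 'Bad message type'}

def table_overlap(a, b):
    """Count shared and combined strings, and list those unique to each."""
    return len(a & b), len(a | b), sorted(a - b), sorted(b - a)

shared, combined, only_ts, only_dll = table_overlap(ts_names, tr1com)
# shared == 2, combined == 4 for these toy sets
```

Run against the real extracted tables, the same function yields the 202-of-220 overlap for tr1com40.dll and the far weaker 151-of-268 overlap for the older Tridcom.dll.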

We are not shocked to find that the threat actor reversed legitimate code to bolster development of the TRITON framework. They want to work smarter, not harder, too. But after reverse engineering legitimate software and implementing the basics of the TriStation protocol, the threat actors still had an incomplete understanding of it. In TRITON's TS_cnames.pyc we saw "Unk75", "Unk76", "Unk83" and other values that were not present in the tr1com40.DLL strings, indicating that the TRITON threat actor may have explored the protocol and annotated their findings beyond what they reverse engineered from the DLL. The gaps in TriStation implementation show us why the actors encountered problems interacting with the Triconex controllers when using TRITON in the wild.

You can see more of the Trilog and Triconex DLL files on VirusTotal.

  • Tricom Communications DLL
  • Parent of Tr1com40.dll
  • Trilog v4.1.360R (RAR archive of TriLog)
  • Trident Communications DLL

Seeing Triconex systems targeted with malicious intent was new to the world six months ago. Moving forward, it would be reasonable to anticipate additional frameworks like TRITON designed for use against other SIS controllers and associated technologies. If Triconex was within scope, similar attacker methodologies may come to affect the other dominant industrial safety technologies.

Basic security measures do little to thwart truly persistent threat actors and monitoring only IT networks is not an ideal situation. Visibility into both the IT and OT environments is critical for detecting the various stages of an ICS intrusion. Simple detection concepts such as baseline deviation can provide insight into abnormal activity.

While the TRITON framework was actively in use, how many traditional ICS “alarms” were set off while the actors tested their exploits and backdoors on the Triconex controller? How many times did the TriStation protocol, as implemented in their Python scripts, fail or cause errors because of non-standard traffic? How many TriStation UDP pings were sent and how many Connection Requests? How did these statistics compare to the baseline for TriStation traffic? There are no answers to these questions for now. We believe that we can identify these anomalies in the long run if we strive for increased visibility into ICS technologies.
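One concrete form of the baseline-deviation idea mentioned above is to count TriStation message types per interval and flag counts that stray far from a learned baseline. This is a rough sketch, not a production detector; the message-type labels come from this post's appendix, but the counts, the tolerance factor, and the flag_anomalies helper are all illustrative assumptions.

```python
from collections import Counter

def flag_anomalies(observed, baseline, tolerance=3.0):
    """Flag message types whose observed counts exceed `tolerance` times
    their baseline rate, or that never appeared in the baseline at all."""
    alerts = []
    for msg_type, count in observed.items():
        expected = baseline.get(msg_type, 0)
        if expected == 0 or count > expected * tolerance:
            alerts.append((msg_type, count, expected))
    return alerts

# Illustrative hourly baseline and an anomalous observation window.
baseline = Counter({'Ping Command': 120, 'Connection Request': 4})
observed = Counter({'Ping Command': 118, 'Connection Request': 40,
                    'Execution Command': 7})
alerts = flag_anomalies(observed, baseline)
# The Connection Request spike and the never-before-seen
# Execution Command both get flagged; normal ping traffic does not.
```

Even something this crude would surface a burst of Connection Requests or Execution Commands from a workstation that normally only pings the controllers.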

We hope that by holding public discussions about ICS technologies, the infosec community can cultivate closer relationships with ICS vendors and give the world better insight into how attackers move from the IT to the OT space. We want to foster more conversations like this and generally share good techniques for finding evil. Since most ICS attacks begin as standard IT intrusions, we should come together to develop and improve guidelines for monitoring the PCs and engineering workstations that bridge the IT and OT networks. We envision a world where attacking or disrupting ICS operations costs the threat actor their cover, their toolkits, their time, and their freedom. It's an ideal world, but something nice to shoot for.

Thanks and Future Work

There is still much to do for TRITON and TriStation. There are many more sub-message types and nuances to parse out in detail, which is hard to do without a controller of our own. And although we’ve published much of what we learned about TriStation here on the blog, our work will continue as we study the protocol further.

Thanks to everyone who did so much public research on TRITON and TriStation. We have cited a few individuals in this blog post, but there is a lot more community-sourced information that gave us clues and leads for our research and testing of the framework and protocol. We also have to acknowledge the research performed by the TRITON attackers. We borrowed a lot of your knowledge about TriStation from the TRITON framework itself.

Finally, remember that we're here to collaborate. We think most of our research is right, but if you notice any errors or omissions, or have ideas for improvements, please contact us.

Recommended Reading

Appendix A: TriStation Message Type Codes

The following table consists of hex values at offset 0 in the TriStation UDP packets and the associated dictionary definitions, extracted verbatim from the TRITON framework in library TS_cnames.pyc.

Value at 0x0 | Message Type

  0x01  Connection Request
  0x02  Connection Response
  0x03  Disconnect Request
  0x04  Disconnect Response
  0x05  Execution Command
  0x06  Ping Command
  0x07  Connection Limit Reached
  0x08  Not Connected
  0x09  MPS Are Dead
  0x0A  Access Denied
  0x0B  Connection Failed
Appendix B: TriStation Execution Command Function Codes

The following table consists of hex values at offset 6 in the TriStation UDP packets and the associated dictionary definitions, extracted verbatim from the TRITON framework in library TS_cnames.pyc.

Value at 0x6 | TS_cnames String

0: 'Start download all',
1: 'Start download change',
2: 'Update configuration',
3: 'Upload configuration',
4: 'Set I/O addresses',
5: 'Allocate network',
6: 'Load vector table',
7: 'Set calendar',
8: 'Get calendar',
9: 'Set scan time',
10: 'End download all',
11: 'End download change',
12: 'Cancel download change',
13: 'Attach TRICON',
14: 'Set I/O address limits',
15: 'Configure module',
16: 'Set multiple point values',
17: 'Enable all points',
18: 'Upload vector table',
19: 'Get CP status ',
20: 'Run program',
21: 'Halt program',
22: 'Pause program',
23: 'Do single scan',
24: 'Get chassis status',
25: 'Get minimum scan time',
26: 'Set node number',
27: 'Set I/O point values',
28: 'Get I/O point values',
29: 'Get MP status',
30: 'Set retentive values',
31: 'Adjust clock calendar',
32: 'Clear module alarms',
33: 'Get event log',
34: 'Set SOE block',
35: 'Record event log',
36: 'Get SOE data',
37: 'Enable OVD',
38: 'Disable OVD',
39: 'Enable all OVDs',
40: 'Disable all OVDs',
41: 'Process MODBUS',
42: 'Upload network',
43: 'Set lable',
44: 'Configure system variables',
45: 'Deconfigure module',
46: 'Get system variables',
47: 'Get module types',
48: 'Begin conversion table download',
49: 'Continue conversion table download',
50: 'End conversion table download',
51: 'Get conversion table',
52: 'Set ICM status',
53: 'Broadcast SOE data available',
54: 'Get module versions',
55: 'Allocate program',
56: 'Allocate function',
57: 'Clear retentives',
58: 'Set initial values',
59: 'Start TS2 program download',
60: 'Set TS2 data area',
61: 'Get TS2 data',
62: 'Set TS2 data',
63: 'Set program information',
64: 'Get program information',
65: 'Upload program',
66: 'Upload function',
67: 'Get point groups',
68: 'Allocate symbol table',
69: 'Get I/O address',
70: 'Resend I/O address',
71: 'Get program timing',
72: 'Allocate multiple functions',
73: 'Get node number',
74: 'Get symbol table',
75: 'Unk75',
76: 'Unk76',
77: 'Unk77',
78: 'Unk78',
79: 'Unk79',
80: 'Go to DOWNLOAD mode',
81: 'Unk81',
83: 'Unk83',
100: 'Command rejected',
101: 'Download all permitted',
102: 'Download change permitted',
103: 'Modification accepted',
104: 'Download cancelled',
105: 'Program accepted',
106: 'TRICON attached',
107: 'I/O addresses set',
108: 'Get CP status response',
109: 'Program is running',
110: 'Program is halted',
111: 'Program is paused',
112: 'End of single scan',
113: 'Get chassis configuration response',
114: 'Scan period modified',
115: '<115>',
116: '<116>',
117: 'Module configured',
118: '<118>',
119: 'Get chassis status response',
120: 'Vectors response',
121: 'Get I/O point values response',
122: 'Calendar changed',
123: 'Configuration updated',
124: 'Get minimum scan time response',
125: '<125>',
126: 'Node number set',
127: 'Get MP status response',
128: 'Retentive values set',
129: 'SOE block set',
130: 'Module alarms cleared',
131: 'Get event log response',
132: 'Symbol table ccepted',
133: 'OVD enable accepted',
134: 'OVD disable accepted',
135: 'Record event log response',
136: 'Upload network response',
137: 'Get SOE data response',
138: 'Alocate network accepted',
139: 'Load vector table accepted',
140: 'Get calendar response',
141: 'Label set',
142: 'Get module types response',
143: 'System variables configured',
144: 'Module deconfigured',
145: '<145>',
146: '<146>',
147: 'Get conversion table response',
148: 'ICM print data sent',
149: 'Set ICM status response',
150: 'Get system variables response',
151: 'Get module versions response',
152: 'Process MODBUS response',
153: 'Allocate program response',
154: 'Allocate function response',
155: 'Clear retentives response',
156: 'Set initial values response',
157: 'Set TS2 data area response',
158: 'Get TS2 data response',
159: 'Set TS2 data response',
160: 'Set program information reponse',
161: 'Get program information response',
162: 'Upload program response',
163: 'Upload function response',
164: 'Get point groups response',
165: 'Allocate symbol table response',
166: 'Program timing response',
167: 'Disable points full',
168: 'Allocate multiple functions response',
169: 'Get node number response',
170: 'Symbol table response',
200: 'Wrong command',
201: 'Load is in progress',
202: 'Bad clock calendar data',
203: 'Control program not halted',
204: 'Control program checksum error',
205: 'No memory available',
206: 'Control program not valid',
207: 'Not loading a control program',
208: 'Network is out of range',
209: 'Not enough arguments',
210: 'A Network is missing',
211: 'The download time mismatches',
212: 'Key setting prohibits this operation',
213: 'Bad control program version',
214: 'Command not in correct sequence',
215: '<215>',
216: 'Bad Index for a module',
217: 'Module address is invalid',
218: '<218>',
219: '<219>',
220: 'Bad offset for an I/O point',
221: 'Invalid point type',
222: 'Invalid Point Location',
223: 'Program name is invalid',
224: '<224>',
225: '<225>',
226: '<226>',
227: 'Invalid module type',
228: '<228>',
229: 'Invalid table type',
230: '<230>',
231: 'Invalid network continuation',
232: 'Invalid scan time',
233: 'Load is busy',
234: 'An MP has re-educated',
235: 'Invalid chassis or slot',
236: 'Invalid SOE number',
237: 'Invalid SOE type',
238: 'Invalid SOE state',
239: 'The variable is write protected',
240: 'Node number mismatch',
241: 'Command not allowed',
242: 'Invalid sequence number',
243: 'Time change on non-master TRICON',
244: 'No free Tristation ports',
245: 'Invalid Tristation I command',
246: 'Invalid TriStation 1131 command',
247: 'Only one chassis allowed',
248: 'Bad variable address',
249: 'Response overflow',
250: 'Invalid bus',
251: 'Disable is not allowed',
252: 'Invalid length',
253: 'Point cannot be disabled',
254: 'Too many retentive variables',
256: 'Unknown reject code'


IN BRIEF: With the growth of technology and the interconnection of many things online (IoT), children's devices have become a major target of cybercrime. This has prompted various measures to protect children online. This article highlights good practices for securing children's ICT devices.

There have been several incidents involving the hacking of devices used by children, with cybercriminals building malicious applications intended to collect children's pictures and audio.

For example, V-Tech, a company that manufactures children's ICT devices, was breached by cybercriminals, and a large amount of children's data ended up in the hands of the attackers.

The United Nations agency for ICT (ITU) runs the well-known Child Online Protection (COP) campaign, reinforced by the security community's "Safer Internet Day" campaign. Together these help, but parents still have every reason to take their own steps to protect their children online.

Reports show that many parents have been buying devices such as smart toys, baby monitors, and other connected playthings (high-tech swings and play pads), all linked to networks.

Remember: although these devices delight children and keep parents close to them, they also significantly increase the risk of cybercrime against children. We continue to urge parents to be careful here.

Many parents explain that these devices help them know how their children are doing (for example, a child's body temperature or heart rate) and let them watch their children closely even from far away. Rightly so: it comforts a parent to know how their child is doing at all times, even at a distance.

Be aware, however, that cybercriminals have continued to break into these devices for various purposes. Some simply spy on families, and easily hacked devices are the simplest way in; others collect children's data and put it to malicious use.

TIPS: How to protect children's devices from cybercrime.

Think before you buy: Before buying such a device, ask the important questions. Do you really need it? What effect does it have on the family's data? Can you secure it? How connected is it? Who made it, and how well is it protected?

Change the default password: Children's ICT devices ship with default passwords that cybercriminals often already know or can easily obtain. If you buy one, change the default password to a strong one, and change it regularly if the device allows, to keep the device protected.

Buy from companies with a good reputation (known brands): Many companies have shipped devices and applications built with malicious intent to collect children's data. Others have been weak at securing the children's devices they make. Make sure you steer clear of both kinds, so that what you buy does not leak your child's data (pictures and audio) for misuse, and so that your family's data is not exposed when a poorly secured vendor is repeatedly breached.

Update the software: As with any software, when flaws are discovered the makers release updates that users must apply to keep their devices protected. Parents should likewise make a habit of updating children's devices whenever the manufacturers release new versions.

Turn it off when not in use: When these devices are switched off, the window for cybercriminals to hack or tamper with them shrinks. If your child's device is not in use, turn it off. This reduces the likelihood of intrusion into the privacy of the children and the family as a whole.