Daily Archives: February 11, 2020

Donating BAT to Have I Been Pwned with Brave Browser


I don't know exactly what's behind the recent uptick, but lately I've had a bunch of people ask me if I've tried the Brave web browser. Why they'd ask me that is much more obvious: Brave is a privacy-focused browser that nukes ads and trackers. It also has some cool built-in stuff like the ability to create a new private browsing window in Tor rather than just your classic incognito window that might ditch all your cookies and browsing history but still connects to the internet directly from your own IP address. But the thing that's really caught the attention of the people I've been speaking to is Brave Rewards, an innovative way of simultaneously eschewing traditional ads whilst still shuffling money towards content creators. It works on the basis of awarding "Basic Attention Tokens" (BAT) based on where people spend their time browsing or choose to donate. I'm a bit passionate about that topic after declaring traditional ad networks to be evil and deciding to ditch them altogether in favour of the sponsorship you see at the top of this blog.

Anyway, yes, I had tried Brave before but no, hadn't really used it to any great extent. And then I got this email on the weekend:

I am not affiliated with Brave, but it is my new go-to browser. I wanted to recommend that you set up your site (HIBP) to allow BAT donations. I think that your community of users would love to have this as an option.

It was 6 years ago now that I first introduced donations to Have I Been Pwned (HIBP) and remain enormously happy with that approach (although admittedly some of those items are a bit out of date). I'm continually amazed at people's willingness to give back via that page so adding the ability to take BAT donations via Brave seemed like a really good idea. As such, if you fire up HIBP in Brave you'll now see the ability to chip in directly from the browser:


And if you're really keen, you can chip in some BAT on a monthly basis:


And as the bloke who originally reached out quite rightly says:

The nice thing about these BAT donations is that they are anonymous and as simple as a click, the best solution I have seen to date.

Which is just great. I'm going to spend a lot more time in Brave because I really love the philosophy of it (that incognito mode over Tor is just great), and I hope this post drives others to take a look, whether it be just for privacy or to help support HIBP. Brave only takes a few minutes to set up; go and grab it from brave.com

How we fought bad apps and malicious developers in 2019


Posted by Andrew Ahn, Product Manager, Google Play + Android App Safety
[Cross-posted from the Android Developers Blog]

Google Play connects users with great digital experiences to help them be more productive and entertained, as well as providing app developers with tools to reach billions of users around the globe. Such a thriving ecosystem can only be achieved and sustained when trust and safety is one of its key foundations. Over the last few years we’ve made the trust and safety of Google Play a top priority, and have continued our investments and improvements in our abuse detection systems, policies, and teams to fight against bad apps and malicious actors.
In 2019, we continued to strengthen our policies (especially to better protect kids and families), continued to improve our developer approval process, initiated a deeper collaboration with security industry partners through the App Defense Alliance, enhanced our machine learning detection systems analyzing an app's code, metadata, and user engagement signals for any suspicious content or behaviors, and scaled the number and the depth of manual reviews. The combination of these efforts has resulted in a much cleaner Play Store:
  • Google Play released a new policy in 2018 to stop apps from unnecessarily accessing privacy-sensitive SMS and Call Log data. We saw a significant 98% decrease in apps accessing SMS and Call Log data as developers partnered with us to update their apps and protect users. The remaining 2% consists of apps that require SMS and Call Log data to perform their core function.
  • One of the best ways to protect users from bad apps is to keep those apps out of the Play Store in the first place. Our improved vetting mechanisms stopped over 790,000 policy-violating app submissions before they were ever published to the Play Store.
  • Similarly to our SMS and Call Log policy, we also enacted a policy to better protect families in May 2019. After putting this in place, we worked with developers to update or remove tens of thousands of apps, making the Play Store a safer place for everyone.
In addition, we've launched a refreshed Google Play Protect experience, our built-in malware protection for Android devices. Google Play Protect scans over 100B apps every day, providing users with information about potential security issues and actions they can take to keep their devices safe and secure. Last year, Google Play Protect also prevented more than 1.9B malware installs from non-Google Play sources.
While we are proud of what we were able to achieve in partnership with our developer community, we know there is more work to be done. Adversarial bad actors will continue to devise new ways to evade our detection systems and put users in harm's way for their own gains. Our commitment in building the world's safest and most helpful app platform will continue in 2020, and we will continue to invest in the key app safety areas mentioned in last year’s blog post:
  • Strengthening app safety policies to protect user privacy
  • Faster detection of bad actors and blocking repeat offenders
  • Detecting and removing apps with harmful content and behaviors
Our teams of passionate product managers, engineers, policy experts, and operations leaders will continue to work with the developer community to accelerate the pace of innovation, and deliver a safer app store to billions of Android users worldwide.

Managed Defense: The Analytical Mindset

When it comes to cyber security (managed services or otherwise), you’re ultimately reliant on analyst expertise to keep your environment safe. Products and intelligence are necessary pieces of the security puzzle to generate detection signal and whittle down the alert chaff, but in the end, an analyst’s trained eyes and investigative process are the deciding factors in effectively going from alerts to answers in your organization.

This blog post highlights the events of a recent investigation by FireEye Managed Defense to showcase the investigative tooling and analysis process of our analysts.

Threat Overview

Recently, FireEye Managed Defense responded to a suspected China-nexus threat group campaign targeting the transportation, construction, and media sectors in Southeast Asia. FireEye’s investigative findings uncovered previously unseen malware, DUOBEAN, a backdoor that solicits additional modules from command-and-control (C2) infrastructure and injects them into process memory.

Initial Lead

Our initial lead for this activity originated from threat hunting in Managed Defense, which identified a ZIP archive containing a malicious LNK file with embedded PowerShell commands to download and inject a malicious payload into victim process memory. The attachment was blocked by a FireEye ETP appliance in Southeast Asia, but network indicators for the payload were extracted for monitoring suspicious infrastructure.

When IP addresses are tasked for monitoring, our network sensors record traffic observed to the suspicious destination for further analysis by our Managed Defense team during threat hunting activities. When new leads from monitored traffic have been collected, our analysts use an internal tool, MDASH, as a dashboard for exploring suspicious network activity.

Analyst Perspective

With mountains of evidence available from endpoint telemetry and network traffic, it’s critical to interrogate artifacts with purposeful lines of questioning in order to respond to threat actor activity as effectively as possible without getting lost in the data.

In this engagement, the initial lead for DUOBEAN activity is a tracked IP address that generated a hunting lead. Given this type of evidence, there are a few questions we're interested in answering before looking at the PCAP contents.

Why did we start monitoring this indicator?

The most important action an analyst can take when evaluating any indicator is understanding what it is trying to detect. At FireEye, monitored network infrastructure is annotated by the analyst who tasked it, providing the necessary context for analysts reviewing the leads it generates.

In this case, our team identified that a recent sample of CHAINLNK from a blocked ETP attachment in Southeast Asia beaconed to infrastructure serving the same SSL certificate. Related infrastructure reusing SSL certificates was enumerated: the malicious domain gathered from the payload was scoped using PassiveTotal to identify SSL certificates associated with the IP, and the certificate SHA-1 was then searched against PassiveTotal results to identify an additional network asset serving the same certificate. This overlapping certificate use is illustrated in Figure 1.
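The certificate pivot described above can be sketched as a simple lookup over passive SSL data. Everything below is a hypothetical stand-in: the IP addresses, the `CERT_A`/`CERT_B` fingerprints, and the in-memory dictionary playing the role of a PassiveTotal-style data source.

```python
# Passive SSL data: IP address -> SHA-1 fingerprints of SSL certificates
# observed being served from that IP. All values are invented examples.
PASSIVE_SSL = {
    "203.0.113.10": {"CERT_A"},              # C2 IP extracted from payload
    "203.0.113.25": {"CERT_A", "CERT_B"},    # unknown asset
    "198.51.100.7": {"CERT_C"},              # unrelated asset
}

def pivot_on_certificates(seed_ip):
    """Return other IPs serving any certificate also seen on seed_ip."""
    seed_certs = PASSIVE_SSL.get(seed_ip, set())
    related = {}
    for ip, certs in PASSIVE_SSL.items():
        if ip == seed_ip:
            continue
        overlap = certs & seed_certs
        if overlap:
            related[ip] = overlap
    return related

# The C2 IP from the blocked attachment is the seed; any IP serving an
# overlapping certificate becomes a new monitoring candidate.
print(pivot_on_certificates("203.0.113.10"))
# → {'203.0.113.25': {'CERT_A'}}
```

In practice the same logic runs against a passive SSL/DNS provider's API rather than an in-memory dictionary, but the pivot itself is just this set intersection.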


Figure 1: Suspicious infrastructure observed in hunting activity

How long have we been tracking this IP Address?

IP addresses can be some of the most volatile indicators in the world of security. The operational cost for an attacker to transition infrastructure is nominal, so the accuracy of the indicator will decrease as time marches on.

In this instance, the IP address had only been monitored for seven (7) days, which increased the credibility of the indicator given its relative freshness.

What’s the prevalence of this activity?

Prevalence of traffic to an IP address gives us a baseline for normalcy. Large volumes of traffic from many varying hosts across multiple organizations shift our frame of reference to be less suspicious about the activity, while traffic from a few consistent internal hosts at one or a few clients is more consistent with targeted attacker activity.

In this engagement, we observed six (6) hosts from one organization making consistent HTTPS requests (without response) to the infrastructure. This limited scope is more consistent with targeted, suspicious activity.

How frequently is activity being observed?

Frequency of traffic informs an analyst of whether the activity is programmatic or interactive. Identical activity at consistent intervals is not something humans can easily replicate. Although malware regularly varies its beaconing intervals, outbound requests at a consistent cadence tell us that some programmatic task, not a user session, is generating the activity.

In this engagement, we observed outbound traffic from all six (6) hosts at 15-minute intervals, which was indicative of programmatic activity initiating the requests.

How much information is being passed between these hosts?

Strictly looking at netflow information, the byte size and directionality of the traffic will also inform your analysis of what you're observing. Small, consistently sized outbound packets tend to be more representative of beaconing traffic (legitimate or otherwise), while varied request/response sizes with frequent communication suggest interactivity.

In this engagement, we observed only a few bytes of outbound traffic on each of the hosts, consistent with beaconing.

Without looking at the packets, our line of questioning against the flow data already begins to characterize the activity as highly suspicious. Looking at the network capture content (Figure 2), we observe that the outbound traffic is strictly TLS Client Hello traffic to a free domain, a category commonly employed by attackers.
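The prevalence, cadence, and size questions above lend themselves to a small triage sketch over flow records. The record schema, the org/host names, and the IP below are invented; the 900-second timestamps mirror the 15-minute beacon cadence observed in this engagement.

```python
from collections import defaultdict
from statistics import pstdev

# Hypothetical flow records: (org, src_host, dst_ip, timestamp_s, bytes_out).
# Six hosts in one org beaconing every 900s (15 min) with small payloads.
FLOWS = [
    ("org-1", f"host-{h}", "203.0.113.10", t * 900, 220)
    for h in range(6)
    for t in range(4)
]

def triage(flows, dst):
    """Summarize prevalence, cadence, and size for traffic to one destination."""
    records = [f for f in flows if f[2] == dst]
    by_host = defaultdict(list)
    for _, host, _, ts, _ in records:
        by_host[host].append(ts)
    jitter = []
    for ts in by_host.values():
        ts.sort()
        deltas = [b - a for a, b in zip(ts, ts[1:])]
        if deltas:
            # Spread of inter-arrival times: near zero means a rigid cadence.
            jitter.append(pstdev(deltas))
    return {
        "hosts": len({f[1] for f in records}),    # prevalence: how many hosts?
        "orgs": len({f[0] for f in records}),     # ...across how many orgs?
        "max_jitter_s": max(jitter),              # 0.0 => programmatic beacon
        "avg_bytes_out": sum(f[4] for f in records) / len(records),
    }

print(triage(FLOWS, "203.0.113.10"))
# → {'hosts': 6, 'orgs': 1, 'max_jitter_s': 0.0, 'avg_bytes_out': 220.0}
```

Few hosts, one organization, zero cadence jitter, and small consistent outbound sizes: the same profile the analysts read off the flow data by hand.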


Figure 2: TLS Client Hello from packet capture

Given the findings from the hunting investigation, the Managed Defense team immediately informed the customer that further endpoint analysis was going to be performed on the six (6) hosts communicating with the suspicious infrastructure. At the time, the customer was not instrumented with FireEye Endpoint Security, so portable collections were captured for each of the hosts and securely uploaded to the Managed Defense team for analysis.

Further Analysis

Endpoint collections containing Windows file system metadata, Windows Registry, Windows Event Logs, web browser history, and a process listing with active network connections were gathered for Managed Defense analysts.

Windows Event Logs by themselves can have hundreds of thousands if not millions of entries. As an analyst, it’s increasingly important to be specific in what questions you’re looking to answer during endpoint investigations. In this case, we have one leading question to begin our investigation: What application is regularly communicating with our suspicious infrastructure?

Active network connections indicated that the legitimate Windows binary "msiexec.exe" was responsible for the network connection to the suspicious infrastructure. This information was also included in detailed process tracking evidence (EID 4688) from Windows Event Logs, listed in Figure 3.


Figure 3: Windows Event Log detailing suspicious use of “msiexec.exe”

The legitimate application "msiexec.exe" is responsible for command-line installation and modification of Windows Installer applications (*.msi files), and it rarely makes network connections. From an analyst's perspective, network activity from a binary that so rarely produces it elicits suspicions of process injection. The parent process in this instance also resides under %AppData%\Roaming, a minimally privileged directory commonly used for malware persistence.
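As a rough illustration of that reasoning, the pattern can be expressed as a filter over parsed process-creation events (EID 4688). The field names and sample records below are hypothetical stand-ins, not the actual evidence from this case.

```python
# Hypothetical parsed 4688-style events; field names are illustrative.
EVENTS = [
    {"process": r"C:\Windows\System32\msiexec.exe",
     "parent": r"C:\Users\victim\AppData\Roaming\eeclnt.exe",
     "made_network_connection": True},
    {"process": r"C:\Windows\System32\msiexec.exe",
     "parent": r"C:\Windows\explorer.exe",
     "made_network_connection": False},
]

def suspicious_msiexec(events):
    """Flag msiexec.exe making network connections with an AppData parent."""
    flagged = []
    for e in events:
        name = e["process"].lower().rsplit("\\", 1)[-1]
        parent = e["parent"].lower()
        # msiexec.exe rarely makes network connections, and a parent living
        # under AppData\Roaming is a common malware persistence location.
        if (name == "msiexec.exe"
                and e["made_network_connection"]
                and r"\appdata\roaming" in parent):
            flagged.append(e)
    return flagged

print(len(suspicious_msiexec(EVENTS)))
# → 1
```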

As analysts, we're confident at this point that malicious activity is occurring on the host. Our line of questioning now transitions from exploring the source of network traffic to discovering the scope of the compromise on the host. To triage, we will use the following line of questioning:

What is it?

For this question, we’re interested in understanding the attacker behavior on the victim computer, specifically the malware in this investigation. This includes functionality and persistence mechanisms used.

With our initial lead being the potential staging directory of %AppData%\Roaming from the Windows Event Log listing, we’ll first look at any files created within a few minutes of “eeclnt.exe”. A Mandiant Redline listing of the files returned from filtering the directory is shown in Figure 4.


Figure 4: Mandiant Redline file listing from potential staging directory, %Appdata%\Roaming

Three (3) suspicious files are returned: "eeclnt.exe", "MSVCR110.dll", and "MSVCR110.dat". These files were uploaded to the FLARE team's internal malware sandbox, Horizon, for further analysis.

PE file information indicates that "eeclnt.exe" is a legitimate copy of the ESET Smart Security binary with a required import of "MSVCR110.dll". "MSVCR110.dll" is a supplementary library required for applications developed with Microsoft Visual C++. In this case, "MSVCR110.dll" was replaced with a malicious loader DLL. When "eeclnt.exe" executes, it imports the malicious "MSVCR110.dll", which loads the backdoor contained in "MSVCR110.dat" into "msiexec.exe" process memory through process hollowing. This technique is called "sideloading" and is commonly used by attackers to evade detection by running malicious code under cover of a legitimate executable.
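A simple hunting heuristic that follows from this behaviour is to look for well-known runtime DLLs sitting next to an executable instead of in a system directory. The watch-list and directory layout below are illustrative assumptions, not FireEye tooling.

```python
import os
import tempfile

# Runtime DLLs that normally live in a system directory; a copy sitting
# next to an application binary is a candidate for sideloading.
# This watch-list is a small illustrative sample, not an exhaustive one.
COMMONLY_SIDELOADED = {"msvcr110.dll", "msvcp110.dll", "version.dll"}

def find_sideload_candidates(app_dir):
    """Return paths in app_dir whose names match the sideload watch-list."""
    hits = []
    for name in os.listdir(app_dir):
        if name.lower() in COMMONLY_SIDELOADED:
            hits.append(os.path.join(app_dir, name))
    return hits

# Demo: recreate the shape of the staging directory from the investigation.
with tempfile.TemporaryDirectory() as d:
    for f in ("eeclnt.exe", "MSVCR110.dll", "MSVCR110.dat"):
        open(os.path.join(d, f), "w").close()
    print([os.path.basename(p) for p in find_sideload_candidates(d)])
# → ['MSVCR110.dll']
```

A real hunt would also compare file hashes against known-good copies, since a legitimately redistributed runtime DLL next to an application is common and benign.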

After initial triage from a Managed Defense analyst, the backdoor was passed along to our FLARE team to reverse engineer for additional identification of malware functionality and family identification. In this case, the backdoor was previously unseen so the Managed Defense analyst who identified the malware named it DUOBEAN.

How does it persist?

On Windows hosts, malware normally persists in one of three ways: Registry "Run" keys, which run a specific application any time a specific user (in some cases, any user) authenticates into the workstation; Windows Services, long-standing background processes typically started at machine boot; and scheduled tasks, which run an arbitrary command or binary at a designated interval.

In this case, by filtering for the sideloaded binary, “eeclnt.exe”, we quickly identified a Windows Service, “Software Update”, created around the file creation timestamp that maintained persistence for the DUOBEAN backdoor.
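Once collected, the three persistence mechanisms above can be triaged together: given a known-bad filename, check every Run key, service, and scheduled task command line for a reference to it. The entries below are illustrative stand-ins for collected evidence.

```python
# Hypothetical collected persistence entries from a single host.
PERSISTENCE = [
    {"type": "run_key", "name": "OneDrive",
     "command": r"C:\Program Files\Microsoft OneDrive\OneDrive.exe"},
    {"type": "service", "name": "Software Update",
     "command": r"C:\Users\victim\AppData\Roaming\eeclnt.exe"},
    {"type": "scheduled_task", "name": "Defrag",
     "command": r"C:\Windows\System32\defrag.exe"},
]

def hits_for_indicator(entries, needle):
    """Return persistence entries whose command line references the indicator."""
    needle = needle.lower()
    return [e for e in entries if needle in e["command"].lower()]

print([e["name"] for e in hits_for_indicator(PERSISTENCE, "eeclnt.exe")])
# → ['Software Update']
```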

How did it get there?

This can be one of the more challenging questions to answer in the investigative world. With limited data retention times and rolling log data, the initial vector is not always easily discerned.

In this case, pivoting to look at browser history and file system modification around the time the DUOBEAN backdoor was created on the victim endpoint led us to our answers. Mandiant Redline output to detail the timeline of initial compromise is displayed in Figure 5.


Figure 5: Mandiant Redline output containing the host initial compromise timeline

The timeline of events shows that the user was phished from their personal Gmail, opening the password protected CHAINLNK attachment delivered from a OneDrive link embedded in the email. Malicious PowerShell commands observed from Windows Event Logs contained in Figure 6 following the activity indicate that CHAINLNK successfully executed and downloaded DUOBEAN.


Figure 6: Malicious CHAINLNK PowerShell commands observed in Windows Event Logs

No further activity was identified from this host based on the investigative evidence provided, and Managed Defense continued to scope the environment for additional indicators of compromise. This specific threat actor was detected early in the attack lifecycle which limited the impact of the threat actor and enabled Managed Defense to guide the victim organization through a quick remediation.

Summary

The China-nexus threat actor activity detailed above expanded to multiple customers, and eventually escalated to a Managed Defense Community Protection Event (CPE). CPEs are rapidly progressing campaigns targeting multiple customers with substantial potential for business impact. Managed Defense customers are immediately notified of CPE activity, indicators are deployed to monitor customer products, and the Managed Defense Consulting team provides insight on how to mitigate risk.

Regardless of the scale of your investigation, time is of the essence. Drowning under investigative data without a clear line of questioning buys attackers additional time to impose their agenda on your organization. Remember, products and intelligence are components of your security practice, but expertise is required in order to transform those inputs into an effective response.

NICE Director Participates as Witness During U.S. House of Representatives Committee on Science, Space, and Technology Hearing

Rodney Petersen, the Director of the National Initiative for Cybersecurity Education (NICE), participated as a witness during the House Science Subcommittee on Research and Technology hearing on "More Hires, Fewer Hacks: Developing the U.S. Cybersecurity Workforce". Rodney shared information on NIST's efforts to energize and promote a robust network and an ecosystem of cybersecurity education, training, and workforce development through the NICE program. Other topics were also highlighted, such as the Regional Alliances and Multistakeholder Partnerships to Stimulate (RAMPS) Cybersecurity Education and Workforce Development program.

5 Signs You Have a Bad VPN

How do you choose a VPN? What are the basic criteria? Price? Coverage? Quality? What should you consider when choosing the best VPN, and how can you tell it's the right one?

There are two main reasons you might want a VPN: privacy, and access to region-restricted websites. But these are not the only things to pay attention to. There are a few other factors to weigh in order to pick the right VPN for your needs, and cooltechzone.com will help you do it right.

Why Your VPN Is Not Working for You

If you already use a well-known VPN service that seems to work great on most devices out there except yours, you have picked the wrong service. Positive feedback and a good reputation on tech sites still don't mean it will suit your needs.

It is a huge mistake to pick a VPN by reviews alone. First of all, analyze your personal needs as an individual Internet user. What do you expect? What kind of technical requirements do you have? Where and when are you going to use the service? Is it for work only? Or do you specifically need it while abroad? That is how it works.


So, here are five signs you have picked the wrong service.

  1. The amount of data (MB) the service provides per month is not enough for you. This is a common problem for those who have chosen free VPN services with no regular payment. Many free VPN services cap the amount of data a user gets per month; if you need more, you have to pay for it. If those free MB run out too fast, change the service.
  2. You notice your Internet connection has become worse. A good VPN is one that does not degrade connection quality. If yours does, the problem is definitely with the service, and that is not acceptable if your work depends on the Internet.
  3. It is hard to connect all of your devices to the service. A good VPN can have up to six devices connected. If yours cannot, maybe it's time to change the service. Search for VPNs that work on mobile devices, desktops, and even in Chrome.
  4. If you can connect to just one or two servers in a country, you need to change your VPN. The number of servers influences many things, including speed. Choose the servers closest to your current location, and when looking for a VPN, count those servers and decide which websites and countries you want to connect to in the future.
  5. If you cannot contact the VPN's support team directly to get answers to any questions or issues you have, change the service.

Now you know the criteria of a good VPN. What do you think about your service? Is it good enough?


Sharenting, BYOD and Kids Online: 10 Digital Tips for Modern Day Parents


Today is Safer Internet Day, which marks the annual occurrence of parents thinking about their kids' online presence (before we go back to thinking very little about it tomorrow!) It's also the day the Courier-Mail here in my home state of Queensland published a piece on sharenting or, as Wikipedia more accurately describes it, the practice of "sharing too much information" about your kids online. That's a worthy discussion to have on this day, although the opening paragraph started out, well... just read it:

I was invited into the local ABC Radio studio to comment on this piece and online safety in general so in a very meta way, I took my 7-year old daughter with me and captured this pic which, after discussion with her, I'm sharing online:


Discussion quickly went from sharenting to BYOD at schools to parental controls and all manner of kid-related cyber things. Having just gone through the BYOD process with my 10-year old son at school (and witnessing the confusion and disinformation from parents and teachers alike), now seemed like a good time to outline some fundamentals whilst sitting on a plane heading down to Sydney to do some adult-related cyber things!

And just a side-note before I jump into those fundamentals: I had a quick flick through the government's eSafety guidance for children under 5 whilst on the plane and it has a bunch of really good stuff. The opening para in the newspaper doesn't do justice to the sentiment of the document, which was more about helping kids understand that we each have different views on how we like our images shared and frankly, that's a good discussion to have. I wanted to make sure that was captured here, and that what seems like an otherwise pretty outlandish statement doesn't detract from the gov's paper.

1. Privacy is Personal

The obvious place to start is to recognise that views regarding privacy are a very personal thing. My views are shaped by a life that's very public due to the nature of what I do and as such, my kids receive more exposure than most (the picture above, for example). I know of other parents who adamantly don't want any trace of their kids on the internet whatsoever. On the other hand, we've all seen "insta" mums / dads who, in my view, go way overboard in the other direction hence the whole sharenting thing.

In any of these debates, the extreme ends of the spectrum tend to be just that - extreme - and the more common-sense approach is the middle ground. Yes, I occasionally post pics of my kids, but it probably amounts to once every couple of months and it's in the context of my profession and projects them in a healthy, positive light. For example, my son teaching kids to code in London a couple of weeks ago:

Nothing bad will happen to him following this photo. He won't be emotionally scarred, nor will he grow up feeling like his privacy has been violated. Neither will my daughter when I share a video of her coding at home:

But everyone needs to find their own balance here and dictating that there's one approach to fit all is frankly, pretty nutty. Which brings me to the next point:

2. Parents Set the Boundaries, Not Kids

Everyone I've spoken to today regarding that Courier-Mail intro agrees on one thing: it's ridiculous to ask a 2-year old for "consent". Not only does a child of that age have absolutely no idea what the social ramifications of sharenting are, there are all sorts of things we do as parents and impose on our children that they have little to no say in. Immunisation. Diet. Education. Religion. The list goes on (as, it seems, does the list of things some people now think you should ask your kids' permission for...)

To be clear, we're talking about toddlers in that story and primary school children in my personal case and obviously the discussion will change as they mature. I talk more and more to my kids about online life not just related to my profession, but about their own day to day experiences. Over time they'll have more control and the balance of power will gradually shift from parents calling all the shots to the kids doing so. But imagine if you let kids of that age decide for themselves how the parenting should be done; my kids would be non-stop eating pancakes and watching YouTube!

More importantly though, the online presence kids have doesn't need to be an all-or-nothing affair or a binary opt-in versus opt-out, so let's talk about privacy controls on social media.

3. Use Social Privacy Controls Liberally

On the one hand, it always surprises me when people aren't familiar with basic privacy controls. But on the other hand, let's face it: Facebook menus and options aren't exactly the most intuitive. Be that as it may, you have a lot of control over who can see what:


As a rule of thumb with all things security and privacy related, apply the principle of least privilege or in other words, only share things with those who need to see them. Photos of kids, for example, might be something you choose to only share with family members - perhaps even only specific family members - and you have the controls available to do just that. I'm not of the "Delete Facebook" mindset because there's enormous social value in staying connected with family and friends and that often means sharing what's going on in your own life as well as that of your kids'.

However, the internet being what it is, all the social privacy controls in the world won't do a bit of good if you screw them up (among other things that can go wrong), so let's talk about what happens then.

4. Assume That Everything You Share is Public

I've never posted a naked photo of my kids to Facebook. Not when they were babies, not with privacy controls configured and not with any other precautions taken. The simple reason for this is that unlike when my parents took photos of me, say, playing in the mud as a 2 year old with no clothes on and put it in the family album (yeah, thanks mum and dad), placing such photos in a digital album creates all new risks. Faced with the privacy controls in that Facebook image above, you're one click - one single click - away from publishing images to the world instead of just to your closest confidants. I've done it before myself and if I can't always get this right when I spend my life thinking about security and privacy, you too will probably make the same mistake at some time.

I work on the assumption that regardless of my best efforts with privacy controls, anything I put on social media could one day be public. It's not just the risk of me messing up a setting, there's nothing to stop other people in my network from taking my pictures or my words and distributing them out beyond my control. That's not necessarily to say it would be done with malicious intent, it could be innocent in nature given the first point above about us all having different tolerances to privacy. And yes, we all have the right to privacy and the reasonable expectation that others won't violate it, but no, that's not always consistent with reality.

Lastly, consider all the other risks presented by the social media platforms themselves; Facebook has obviously had major issues relating to privacy in the past. Your data is viewed as a valuable commodity that many social media platforms have monetised without sufficiently informed consent and a bunch of online services then also get themselves hacked. Just picking incidents I've personally been involved with: VTech had kids names and photos breached, CloudPets left kids' voice messages exposed and TicTocTrack had security vulnerabilities that would allow anyone to track the movements of your kid. Proceed with the mindset of "assume breach" and share information accordingly.

5. Admin Rights are not for Children

Onto the whole BYOD thing: with my son now in year 5 and needing his own machine at school, this is a hot button topic for me. The first point here is super easy and for anyone working in the same industry as I do, it's also super obvious: kids should never have admin rights on machines. Admin rights grant them the ability to install near enough anything they'd like (that includes malware, ransomware and remote access trojans), make configuration changes and for all intents and purposes, play "god" with the machine. Most companies don't let adults do that in the workplace, so does anyone seriously think it's a good idea for young children?

The thing is that as with corporate life, the practical barriers posed by not having admin rights are very limited. There's usually a small, finite list of tools needed for the student (or employee) to perform the required tasks and they can either be pre-installed or added later by an administrator (AKA, a parent). Most of the time this is going to boil down to Microsoft Office (or equivalent) and a browser. My son has also been using OneNote which I installed from the Microsoft Store and then that's it, he's good to go.

Given the chance, kids will install every piece of crapware they can get their hands on. Chrome extensions seem to be a popular one amongst kids with more liberal access rights. I learned this recently as my son asked if he could install the Predator extension which "gives a clean and modern look to your default Chrome homepage" and has "epic themes". That's right up a 10-year old's alley! However:

[Screenshot: the permissions requested by the Predator Chrome extension]

This was what they call "a teachable moment" as we discussed what it would mean if the developer (who appears to have a parked domain but was previously a "pragmatic media agency" specialising in "hyperlocal targeting") can access everything in the permissions list above. And then, how much worse could it get?

Never, ever let kids install things on their machines without at least adult supervision and preferably, digital controls enforced by the parent. Which can actually be very easy:

6. Native Digital Parental Controls are Free and Easy

This is another one that parents should absolutely be on top of but frequently aren't. Windows and iOS have parental controls natively baked right into the operating systems. They're the two platforms my kids spend time on but there are also native controls for Android and native controls for macOS.

Right out of the box, I can easily control what the kids can do on the hand-me-down iPhones they have, for example:

[Screenshot: iOS parental controls on the kids' iPhones]

They can use the devices only for a limited amount of time and only between limited hours of the day unless the app is "always allowed" (for example, the phone). They can request to download (and purchase) apps but they need parental approval. They can't change their passcodes. Their usage activity can be remotely viewed from a parent's device. This is all free and built into iOS, with similar controls available in the Microsoft ecosystem:

[Screenshot: parental controls in the Microsoft ecosystem]

If you're a parent with a kid using one of these devices and you don't have the controls turned on already, start there. There are other third-party products on the market which I wouldn't even consider unless there was a very compelling reason to use them over and above the native controls (and no - neither their marketing pitch nor a financial deal stitched up with the school are compelling reasons). Never, ever, under any circumstances resort to spyware or, as it's frequently known, stalkerware. Any product with the ability to read messages, view photos, log calls or provide similar levels of invasive access is a recipe for disaster. Firstly, there are all the precedents of these services suffering data breaches and leaking the aforementioned content all over the web. Secondly, their use in monitoring children is indistinguishable from their use in abusive relationships, something Eva Galperin from the EFF has been working hard to try and stamp out (with some great success stories too) which means they're less readily available now than they once were. I still use the digital controls because they're great at doing what they do, but they're far from perfect...

7. Your Children are (Probably) Smarter Than You

I'll just leave this one right here as it's a perfect illustration of the heading:

Now again: if I can't get this right given what I do for a living, what hope is there for your average parent?! And that's just one of many, many ways that kids can skirt around the very controls put in place to limit their access. Not only will they be able to circumvent many of the controls, they'll be able to conceal it too. They'll see gore. They'll see porn. They'll behave in ways that don't align with your own values, just like we did when we were kids. Which is why this next section is so important:

8. Digital Parental Controls Are Not a Substitute for Actual Parenting

Digital parental controls are great at doing the sorts of things mentioned above. They're efficient, automated and pretty much "set and forget". But they don't understand nuances such as what certain content can mean for children or what sort of communication is appropriate within the constraints of the app limits. Digital controls can never replace the role parents play in how the kids use devices; they should be complementary to parenting rather than a substitute for it.

Having the kids use devices in the living room rather than the bedroom is a perfect example of practical parental oversight. You hear what they're listening to, glance at what they're doing as you walk past and are on hand to answer questions as they come up. I sit with the kids when they do coding exercises (such as the earlier video with my daughter) and check out the homework my son does on his PC. This will inevitably be a pattern that changes as they grow up, form relationships and seek (and deserve) more privacy, but certainly at this age close parental oversight is absolutely invaluable and, I'd argue, essential.

9. It's Not about Screen Time, it's about What They Do on the Screen

As I was writing this up today I flicked my sister-in-law a draft copy and asked for input. Think of Cathy in an edutech context as you'd think of me in an infosec context; lots of content creation, travel, speaking and thought going into the topic. Plus, she's both a teacher and a mother of kids a similar age to mine so her opinion holds a lot of weight in my book.

She emphasised today (as she has many times in the past) that the amount of time spent on the screen isn't the issue, rather it's the activities the kids are involved in. Screen time can be mind-numbing, unconstructive, "chewing gum for the brain", as they say. It can also be thought-provoking, creative and educational and I'm sure we can all relate to examples at both ends of the extreme. She shared a New York Times piece from December titled Is Screen Time Really Bad for Kids? which contains this great observation:

A screen-related activity may be beneficial or harmful depending on who is doing it, how much they’re doing it, when they’re doing it and what they’re not doing instead

I suspect that at some level we all know this but often don't recognise the parental involvement required to separate those harmful from beneficial activities. I certainly need to work on that more and I recognise that one of my early failings as a "digital parent" was to allow my son to focus more on "time spent on screen" than "activities performed on screen". Recently, the penny dropped that he was viewing the allowable screen time as a goal rather than a limit; "but I haven't even reached my target time yet!" Screen time now involves him asking to use the device and discussing what he's going to do on it which, so far, is working out much better.

10. Technology is a Joyous Thing to be Shared with Your Kids

Every day, I marvel at the things we can do with the consumer devices we have at our disposal. I mean I seriously stop and think about how awesome it is and reflect back to when I was my kids' ages and what I had then. Rotary dial telephones. Fax machines. Floppy disks (if you could afford a computer). I want them learning how to use technology to the fullest extent possible as part of both their social and professional development and I want them to learn how to use it well.

Occasionally after posting a video of the kids coding or similar, I'll get comments along the lines of "let kids be kids, they should be outside playing". What a ridiculous statement! It's one of those extreme points of view I mentioned earlier and it assumes that firstly, screen time is a mutually exclusive thing to physical exercise and secondly, that it somehow detracts from a child's development rather than enhances it. This view today is no more insane than the view paleolithic parents had about limiting their children's access to fire time (ok, not really, but it's a fun read 🙂).

Tech is fun. Understand how it works, set boundaries, find a healthy balance and have a laugh with your kids. Speaking of having a laugh: